2022
Activity report
Project-Team
HYBRID
RNSR: 201322122U
In partnership with:
Institut national des sciences appliquées de Rennes, Université Rennes 1, CNRS
Team name:
3D interaction with virtual environments using body and mind
In collaboration with:
Institut de recherche en informatique et systèmes aléatoires (IRISA)
Domain
Perception, Cognition and Interaction
Theme
Interaction and visualization
Creation of the Project-Team: 2013 July 01

Keywords

Computer Science and Digital Science

  • A2.5. Software engineering
  • A5.1. Human-Computer Interaction
  • A5.1.1. Engineering of interactive systems
  • A5.1.2. Evaluation of interactive systems
  • A5.1.3. Haptic interfaces
  • A5.1.4. Brain-computer interfaces, physiological computing
  • A5.1.5. Body-based interfaces
  • A5.1.6. Tangible interfaces
  • A5.1.7. Multimodal interfaces
  • A5.1.8. 3D User Interfaces
  • A5.1.9. User and perceptual studies
  • A5.2. Data visualization
  • A5.6. Virtual reality, augmented reality
  • A5.6.1. Virtual reality
  • A5.6.2. Augmented reality
  • A5.6.3. Avatar simulation and embodiment
  • A5.6.4. Multisensory feedback and interfaces
  • A5.10.5. Robot interaction (with the environment, humans, other robots)
  • A6. Modeling, simulation and control
  • A6.2. Scientific computing, Numerical Analysis & Optimization
  • A6.3. Computation-data interaction

Other Research Topics and Application Domains

  • B1.2. Neuroscience and cognitive science
  • B2.4. Therapies
  • B2.5. Handicap and personal assistances
  • B2.6. Biological and medical imaging
  • B2.8. Sports, performance, motor skills
  • B5.1. Factory of the future
  • B5.2. Design and manufacturing
  • B5.8. Learning and training
  • B5.9. Industrial maintenance
  • B6.4. Internet of things
  • B8.1. Smart building/home
  • B8.3. Urbanism and urban planning
  • B9.1. Education
  • B9.2. Art
  • B9.2.1. Music, sound
  • B9.2.2. Cinema, Television
  • B9.2.3. Video games
  • B9.4. Sports
  • B9.6.6. Archeology, History

1 Team members, visitors, external collaborators

Research Scientists

  • Anatole Lécuyer [Team leader, Inria, Senior Researcher, HDR]
  • Ferran Argelaguet [Inria, Researcher, HDR]
  • Marc Macé [CNRS, Researcher, HDR]
  • Léa Pillette [CNRS, Researcher, from Nov 2022]

Faculty Members

  • Bruno Arnaldi [INSA Rennes, Emeritus, from Nov 2022, HDR]
  • Bruno Arnaldi [INSA Rennes, Professor, until Nov 2022, HDR]
  • Mélanie Cogné [Université Rennes I, Associate Professor, CHU Rennes]
  • Valérie Gouranton [INSA Rennes, Associate Professor, HDR]

Post-Doctoral Fellows

  • Elodie Bouzbib [Inria, Co-supervised with RAINBOW]
  • Panagiotis Kourtesis [Inria, until Sep 2022]

PhD Students

  • Antonin Cheymol [Inria, INSA Rennes]
  • Gwendal Fouché [Inria, INSA Rennes]
  • Adelaïde Genay [Inria, Co-supervised with POTIOC]
  • Vincent Goupil [SOGEA Bretagne, INSA Rennes]
  • Lysa Gramoli [ORANGE LABS, INSA Rennes]
  • Jeanne Hecquard [Inria, Université Rennes I, from Oct 2022]
  • Gabriela Herrera Altamira [Inria, Co-supervised with Loria]
  • Emilie Hummel [Inria, INSA Rennes]
  • Salome Le Franc [CHRU Rennes, until Oct 2022, Université Rennes I]
  • Julien Lomet [Université Paris 8, Université Rennes 2]
  • Maé Mavromatis [Inria, INSA Rennes]
  • Yann Moullec [Université Rennes I]
  • Grégoire Richard [Inria, Co-supervised with LOKI]
  • Mathieu Risy [INSA Rennes, from Oct 2022]
  • Sony Saint-Auret [Inria, from Oct 2022, INSA Rennes]
  • Emile Savalle [Université Rennes I, from Oct 2022]
  • Sebastian Santiago Vizcay [Inria, INSA Rennes]

Technical Staff

  • Alexandre Audinot [INSA Rennes, Engineer]
  • Ronan Gaugne [Université Rennes I, Engineer, Technical director of Immersia]
  • Florian Nouviale [INSA Rennes, Engineer]
  • Adrien Reuzeau [INSA Rennes, Engineer, from Oct 2022]
  • Justine Saint-Aubert [Inria, Engineer]

Administrative Assistant

  • Nathalie Denis [Inria]

Visiting Scientist

  • Yutaro Hirao [University of Tokyo, until Sep 2022]

External Collaborators

  • Rebecca Fribourg [Centrale Nantes]
  • Guillaume Moreau [IMT-Atlantique, Professor, HDR]
  • Jean-Marie Normand [Centrale Nantes, HDR]

2 Overall objectives

Our research project belongs to the scientific field of Virtual Reality (VR) and 3D interaction with virtual environments. VR systems can be used in numerous applications such as for industry (virtual prototyping, assembly or maintenance operations, data visualization), entertainment (video games, theme parks), arts and design (interactive sketching or sculpture, CAD, architectural mock-ups), education and science (physical simulations, virtual classrooms), or medicine (surgical training, rehabilitation systems). A major change that we foresee in the next decade concerning the field of Virtual Reality relates to the emergence of new paradigms of interaction (input/output) with Virtual Environments (VE).

As of today, the most common way to interact with 3D content remains the measurement of the user's motor activity, i.e., the gestures and physical motions performed when manipulating different kinds of input device. However, a recent trend consists in soliciting more movement and more physical engagement from the user's body. We can notably stress the emergence of bimanual interaction, natural walking interfaces, and whole-body involvement. These new interaction schemes bring a new level of complexity in terms of generic physical simulation of potential interactions between the virtual body and the virtual surroundings, and a challenging "trade-off" between performance and realism. Moreover, research is also needed to characterize the influence of these new sensory cues on the resulting feelings of "presence" and immersion of the user.

Besides, a novel kind of user input has recently appeared in the field of virtual reality: the user's mental activity, which can be measured by means of a "Brain-Computer Interface" (BCI). Brain-Computer Interfaces are communication systems which measure a user's electrical cerebral activity and translate it, in real time, into an exploitable command. BCIs introduce a new way of interacting "by thought" with virtual environments. However, current BCIs can only discriminate a small number of mental states, and hence a small number of mental commands. Thus, research is still needed to extend the capacities of BCIs, and to better exploit the few available mental states in virtual environments.

Our first motivation is thus to design novel “body-based” and “mind-based” controls of virtual environments and to reach, in both cases, more immersive and more efficient 3D interaction.

Furthermore, in current VR systems, motor activities and mental activities are always considered separately and exclusively. This echoes the well-known “body-mind dualism” at the heart of historical philosophical debates. In this context, our objective is to introduce novel “hybrid” interaction schemes in virtual reality, by considering motor and mental activities jointly, i.e., in a harmonious, complementary, and optimized way. Thus, we intend to explore novel paradigms of 3D interaction mixing body and mind inputs. Our approach becomes even more challenging when considering and connecting multiple users, which implies multiple bodies and multiple brains collaborating and interacting in virtual reality.

Our second motivation is thus to introduce a “hybrid approach” that mixes the mental and motor activities of one or multiple users in virtual reality.

3 Research program

The scientific objective of the Hybrid team is to improve 3D interaction of one or multiple users with virtual environments, by making full use of the physical engagement of the body, and by incorporating mental states by means of brain-computer interfaces. We intend to improve each component of this framework individually, as well as their subsequent combinations.

The “hybrid” 3D interaction loop between one or multiple users and a virtual environment is depicted in Figure 1. Different kinds of 3D interaction situations are distinguished (red arrows, bottom): 1) body-based interaction, 2) mind-based interaction, 3) hybrid and/or 4) collaborative interaction (with at least two users). In each case, three scientific challenges arise which correspond to the three successive steps of the 3D interaction loop (blue squares, top): 1) the 3D interaction technique, 2) the modeling and simulation of the 3D scenario, and 3) the design of appropriate sensory feedback.

Figure 1
Figure 1: 3D hybrid interaction loop between one or multiple users and a virtual reality system. Top (in blue) three steps of 3D interaction with a virtual environment: (1-blue) interaction technique, (2-blue) simulation of the virtual environment, (3-blue) sensory feedbacks. Bottom (in red) different cases of interaction: (1-red) body-based, (2-red) mind-based, (3-red) hybrid, and (4-red) collaborative 3D interaction.

The 3D interaction loop involves various possible inputs from the user(s) and different kinds of output (or sensory feedback) from the simulated environment. Each user can involve his/her body and mind by means of corporal and/or brain-computer interfaces. A hybrid 3D interaction technique (1) mixes mental and motor inputs and translates them into a command for the virtual environment. The real-time simulation (2) of the virtual environment takes these commands into account to update the state of the virtual world and virtual objects. The state changes are sent back to the user and perceived through different sensory feedbacks (e.g., visual, haptic and/or auditory) (3). These sensory feedbacks close the 3D interaction loop. Other users can also interact with the virtual environment using the same procedure, and can possibly “collaborate” using “collaborative interaction techniques” (4).
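
To make the loop concrete, it can be sketched as a simple real-time cycle. The sketch below is written in C# (the implementation language of the team's software tools) and is purely schematic: all type names are hypothetical and do not correspond to any actual Hybrid software.

    // Schematic "hybrid" 3D interaction loop (illustrative only; hypothetical types).
    public interface IUserInput { }                  // motor and/or cerebral input
    public interface ICommand { }                    // command for the virtual environment

    public abstract class InteractionTechnique       // step (1): inputs -> command
    {
        public abstract ICommand Translate(IUserInput input);
    }

    public abstract class Simulation                 // step (2): update the virtual world
    {
        public abstract void Update(ICommand command, double dt);
    }

    public abstract class SensoryFeedback            // step (3): visual, haptic, auditory...
    {
        public abstract void Render(Simulation state);
    }

    public static class InteractionLoop
    {
        // One iteration of the loop, repeated in real time and per user.
        public static void Step(IUserInput input, InteractionTechnique technique,
                                Simulation simulation, SensoryFeedback[] feedbacks, double dt)
        {
            var command = technique.Translate(input); // (1)
            simulation.Update(command, dt);           // (2)
            foreach (var f in feedbacks)
                f.Render(simulation);                 // (3) closes the loop
        }
    }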

This description stresses three major challenges, which correspond to three mandatory steps when designing 3D interaction with virtual environments:

  • 3D interaction techniques: This first step consists in translating the actions or intentions of the user (inputs) into an explicit command for the virtual environment. In virtual reality, the classical tasks that require such kinds of user command were classified early on into four categories 47: navigating the virtual world, selecting a virtual object, manipulating it, or controlling the application (entering text, activating options, etc). However, adding a third dimension and using stereoscopic rendering along with advanced VR interfaces cause many 2D techniques to become inappropriate. It is thus necessary to design specific interaction techniques and adapted tools. This challenge is renewed here by the various kinds of 3D interaction which are targeted. In our case, we consider various situations, with motor and/or cerebral inputs, and potentially multiple users.
  • Modeling and simulation of complex 3D scenarios: This second step corresponds to the update of the state of the virtual environment, in real-time, in response to all the potential commands or actions sent by the user. The complexity of the data and phenomena involved in 3D scenarios is constantly increasing. It corresponds for instance to the multiple states of the entities present in the simulation (rigid, articulated, deformable, fluids, which can constitute both the user’s virtual body and the different manipulated objects), and the multiple physical phenomena implied by natural human interactions (squeezing, breaking, melting, etc). The challenge consists here in modeling and simulating these complex 3D scenarios and meeting, at the same time, two strong constraints of virtual reality systems: performance (real-time and interactivity) and genericity (e.g., multi-resolution, multi-modal, multi-platform, etc).
  • Immersive sensory feedbacks: This third step corresponds to the display of the multiple sensory feedbacks (output) coming from the various VR interfaces. These feedbacks enable the user to perceive the changes occurring in the virtual environment. They close the 3D interaction loop, making the user immersed, and potentially generating a subsequent feeling of presence. Among the various VR interfaces developed so far, we can stress two kinds of sensory feedback: visual feedback (3D stereoscopic images using projection-based systems such as CAVE systems, or Head-Mounted Displays); and haptic feedback (related to the sense of touch and to tactile or force-feedback devices). The Hybrid team has a strong expertise in haptic feedback, and in the design of haptic and “pseudo-haptic” rendering 48. Note that a major trend in the community, which is strongly supported by the Hybrid team, relates to a “perception-based” approach, which aims at designing sensory feedbacks that are well in line with human perceptual capabilities.

These three scientific challenges are addressed differently according to the context and the user inputs involved. We propose to consider three different contexts, which correspond to the three different research axes of the Hybrid research team, namely: 1) body-based interaction (motor input only), 2) mind-based interaction (cerebral input only), and then 3) hybrid and collaborative interaction (i.e., the mixing of body and brain inputs from one or multiple users).

3.1 Research Axes

The scientific activity of Hybrid team follows three main axes of research:

  • Body-based interaction in virtual reality. Our first research axis concerns the design of immersive and effective "body-based" 3D interactions, i.e., relying on a physical engagement of the user's body. This trend is probably the most popular one in VR research at the moment. Most VR setups make use of tracking systems which measure specific positions or actions of the user in order to interact with a virtual environment. However, in recent years, novel options have emerged for measuring "full-body" movements or other, even less conventional, inputs (e.g., body equilibrium). In this first research axis we focus on these emerging methods of "body-based interaction" with virtual environments. This implies the design of novel 3D user interfaces and 3D interaction techniques, new simulation models and techniques, and innovative sensory feedback for body-based interaction with virtual worlds. It involves real-time physical simulation of complex interactive phenomena, and the design of corresponding haptic and pseudo-haptic feedback.
  • Mind-based interaction in virtual reality. Our second research axis concerns the design of immersive and effective “mind-based” 3D interactions in Virtual Reality. Mind-based interaction with virtual environments relies on Brain-Computer Interface technology, which corresponds to the direct use of brain signals to send “mental commands” to an automated system such as a robot, a prosthesis, or a virtual environment. BCI is a rapidly growing area of research and several impressive prototypes are already available. However, the emergence of such a novel user input also calls for novel and dedicated 3D user interfaces. This implies studying the extension of the mental vocabulary available for 3D interaction with VEs, designing specific 3D interaction techniques “driven by the mind” and, last, designing immersive sensory feedbacks that could help improve the learning of brain control in VR.
  • Hybrid and collaborative 3D interaction. Our third research axis intends to study the combination of motor and mental inputs in VR, for one or multiple users. This concerns the design of mixed systems, with potentially collaborative scenarios involving multiple users, and thus, multiple bodies and multiple brains sharing the same VE. This research axis therefore involves two interdependent topics: 1) collaborative virtual environments, and 2) hybrid interaction. It should ultimately deliver collaborative virtual environments for multiple users, and shared systems mixing body and mind inputs.

4 Application domains

4.1 Overview

The research program of the Hybrid team aims at next generations of virtual reality and 3D user interfaces which could possibly address both the “body” and “mind” of the user. Novel interaction schemes are designed, for one or multiple users. We target better integrated systems and more compelling user experiences.

The applications of our research program correspond to the applications of virtual reality technologies which could benefit from the addition of novel body-based or mind-based interaction capabilities:

  • Industry: with training systems, virtual prototyping, or scientific visualization;
  • Medicine: with rehabilitation and reeducation systems, or surgical training simulators;
  • Entertainment: with the movie industry, content customization, video games or attractions in theme parks;
  • Construction: with virtual mock-up design and review, or historical/architectural visits;
  • Cultural Heritage: with acquisition, virtual excavation, virtual reconstruction and visualization.

5 Social and environmental responsibility

5.1 Impact of research results

A salient initiative carried out by Hybrid in relation to social responsibility in the field of health is the Inria Covid-19 project “VERARE”. VERARE is a unique and innovative concept implemented in record time thanks to a close collaboration between the Hybrid research team and the teams from the intensive care and physical and rehabilitation medicine departments of Rennes University Hospital. VERARE consists in using virtual environments and VR technologies for the rehabilitation of Covid-19 patients coming out of a coma, weakened, and with strong difficulties in recovering the ability to walk. With VERARE, the patient is immersed in different virtual environments using a VR headset. The patient is represented by an “avatar” carrying out different motor tasks involving the lower limbs, for example walking, jogging, or avoiding obstacles. Our main hypothesis is that the observation of such virtual actions, and the progressive resumption of motor activity in VR, will allow a quicker start to rehabilitation, as soon as the patient leaves the ICU. The patient can then carry out sessions in his/her room, or even from the hospital bed, in simple and secure conditions, hoping to obtain a final clinical benefit, either in terms of motor and walking recovery or in terms of hospital length of stay. The project started at the end of April 2020, and we were able to deploy a first version of our application at the Rennes hospital in mid-June 2020, only two months later. Covid patients are now using our virtual reality application at the Rennes University Hospital, and the clinical evaluation of VERARE is still ongoing, expected to be completed in 2023. The project is also pushing the research activity of Hybrid on many aspects, e.g., haptics, avatars, and VR user experience, with 3 papers published in IEEE TVCG in 2022.

6 Highlights of the year

6.1 Salient news

  • Arrival of Marc Macé (CR CNRS) from IRIT Lab (January)
  • Habilitation thesis of Valérie Gouranton (March)
  • Arrival of Léa Pillette as new CNRS research scientist in the team (November)
  • Bruno Arnaldi became Professor Emeritus (November)

6.2 Awards

  • IEEE VGTC Virtual Reality Academy: induction of Anatole Lécuyer (2022).
  • IEEE VR 3DUI Contest 2022: Third Place for Hybrid team.
  • IEEE TVCG Best Associate Editor, Honorary Mention Award for Anatole Lécuyer.
  • Best Paper Award at ICAT-EGVE 2022 Conference for the paper “Manipulating the Sense of Embodiment in Virtual Reality: a study of the interactions between the senses of agency, self-location and ownership” authored by Martin Guy, Camille Jeunet-Kelway, Guillaume Moreau and Jean-Marie Normand.
  • Laureate of “Trophée Valorisation du Campus d’Innovation de Rennes” in category “Numérique” (Bruno Arnaldi, Valérie Gouranton, Florian Nouviale).
  • “Open-Science Award” for OpenViBE software in category "Documentation".

7 New software and platforms

7.1 New software

7.1.1 OpenViBE

  • Keywords:
    Neurosciences, Interaction, Virtual reality, Health, Real time, Neurofeedback, Brain-Computer Interface, EEG, 3D interaction
  • Functional Description:
    OpenViBE is a free and open-source software platform devoted to the design, test and use of Brain-Computer Interfaces (BCI). The platform consists of a set of software modules that can be integrated easily and efficiently to design BCI applications. The key features of the OpenViBE software are its modularity, its high performance, its portability, its multiple-user facilities and its connection with high-end/VR displays. The platform's "Designer" tool enables users to build complete scenarios based on existing software modules, using a dedicated graphical language and a simple Graphical User Interface (GUI). This software is available on the Inria Forge under the terms of the AGPL licence; it was officially released in June 2009. Since then, the OpenViBE software has been downloaded more than 60,000 times, and it is used by numerous laboratories, projects, and individuals worldwide. More information, downloads, tutorials, videos and documentation are available on the OpenViBE website.
  • Release Contributions:

    Added:
    - Metabox to perform log of signal power
    - Artifacted files for algorithm tests

    Changed:
    - Refactoring of the CMake build process
    - Updated wildcards in .gitignore
    - Updated CSV File Writer/Reader (stimulations only)

    Removed:
    - Ogre games and dependencies
    - Mensia distribution

    Fixed:
    - Intermittent compiler bug

  • News of the Year:

    - Python 2 support dropped in favour of Python 3
    - New feature boxes: Riemannian geometry, multimodal Graz visualisation, artefact detection, feature selection, stimulation validator
    - Support for Ubuntu 18.04 and Fedora 31

  • URL:
  • Contact:
    Anatole Lécuyer
  • Participants:
    Cedric Riou, Thierry Gaugry, Anatole Lécuyer, Fabien Lotte, Jussi Lindgren, Laurent Bougrain, Maureen Clerc Gallagher, Théodore Papadopoulo, Thomas Prampart
  • Partners:
    INSERM, GIPSA-Lab

7.1.2 Xareus

  • Name:
    Xareus
  • Keywords:
    Virtual reality, Augmented reality, 3D, 3D interaction, Behavior modeling, Interactive Scenarios
  • Scientific Description:
    Xareus mainly contains a scenario engine (#SEVEN) and a relation engine (#FIVE). #SEVEN is a model and an engine based on Petri nets extended with sensors and effectors, enabling the description and execution of complex and interactive scenarios. #FIVE is a framework for the development of interactive and collaborative virtual environments, developed to answer the need for an easier and faster design and development of virtual reality applications. #FIVE provides a toolkit that simplifies the declaration of possible actions and behaviours of objects in a VE, as well as a toolkit that facilitates the setting and management of collaborative interactions in a VE. It is compliant with a distribution of the VE on different setups, and also proposes guidelines to efficiently create a collaborative and interactive VE. (A minimal illustrative sketch of the sensor/effector principle is given at the end of this subsection.)
  • Functional Description:
    Xareus is implemented in C# and is available as libraries. An integration with the Unity 3D engine also exists. The user can focus on the domain-specific aspects of his/her application (industrial training, medical training, etc.) thanks to Xareus modules. These modules can be used in a vast range of domains for augmented and virtual reality applications requiring interactive environments and collaboration, such as training. The scenario engine is based on Petri nets extended with sensors and effectors, allowing the execution of complex scenarios for driving Virtual Reality applications. Xareus comes with a scenario editor integrated into Unity 3D for creating, editing, and remotely controlling and running scenarios. The relation engine contains software modules that can be interconnected, and helps in building interactive and collaborative virtual environments.
  • Release Contributions:
    This version is up to date with Unity 3D 2021.3 LTS and gathers the previously separate tools #FIVE and #SEVEN.
  • News of the Year:
    The #FIVE and #SEVEN tools have been gathered into a single software package named Xareus, which was updated to be compatible with the latest version and capabilities of Unity 3D 2021.3 LTS.
  • URL:
  • Publications:
  • Contact:
    Valérie Gouranton
  • Participants:
    Florian Nouviale, Valérie Gouranton, Bruno Arnaldi, Vincent Goupil, Carl-Johan Jorgensen, Emeric Goga, Adrien Reuzeau, Alexandre Audinot
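
As mentioned in the scientific description above, the following minimal sketch illustrates the sensor/effector principle behind the #SEVEN scenario engine: transitions of a Petri net fire only when a condition observed in the virtual environment holds (sensor), and then act on the environment (effector). The API shown is deliberately simplified and hypothetical; it is not the actual Xareus code.

    // Minimal sketch of a Petri net extended with sensors and effectors
    // (hypothetical API, for illustration only).
    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class Place { public int Tokens; }

    public class Transition
    {
        public List<Place> Inputs = new List<Place>();
        public List<Place> Outputs = new List<Place>();
        public Func<bool> Sensor = () => true;    // condition observed in the VE
        public Action Effector = () => { };       // effect applied to the VE

        public bool Enabled => Inputs.All(p => p.Tokens > 0) && Sensor();

        public void Fire()
        {
            foreach (var p in Inputs) p.Tokens--;
            Effector();                           // act on the virtual environment
            foreach (var p in Outputs) p.Tokens++;
        }
    }

    public class ScenarioEngine
    {
        public List<Transition> Transitions = new List<Transition>();

        // Called each frame: advance the scenario by firing enabled transitions.
        public void Step()
        {
            foreach (var t in Transitions)
                if (t.Enabled) t.Fire();  // re-checked, as a firing may consume tokens
        }
    }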

7.1.3 AvatarReady

  • Name:
    A unified platform for the next generation of our virtual selves in digital worlds
  • Keywords:
    Avatars, Virtual reality, Augmented reality, Motion capture, 3D animation, Embodiment
  • Scientific Description:
    AvatarReady is an open-source tool (AGPL) written in C#, providing a plugin for the Unity 3D software to facilitate the use of humanoid avatars for mixed reality applications. Due to the current complexity of semi-automatically configuring avatars coming from different origins, and using different interaction techniques and devices, AvatarReady aggregates several industrial solutions and results from the academic state of the art to propose a simple, fast and seamless way to use humanoid avatars in mixed reality. For example, it is possible to automatically configure avatars from different libraries (e.g., rocketbox, character creator, mixamo), as well as to easily use different avatar control methods (e.g., motion capture, inverse kinematics). AvatarReady is also organized in a modular way so that scientific advances can be progressively integrated into the framework. AvatarReady is furthermore accompanied by a utility to generate ready-to-use avatar packages that can be used on the fly, as well as a website to display them and offer them for download to users. (A schematic sketch of this configuration workflow is given at the end of this subsection.)
  • Functional Description:
    AvatarReady is a Unity tool to facilitate the configuration and use of humanoid avatars for mixed reality applications. It comes with a utility to generate ready-to-use avatar packages and a website to display them and offer them for download.
  • URL:
  • Authors:
    Ludovic Hoyet, Fernando Argelaguet Sanz, Adrien Reuzeau
  • Contact:
    Ludovic Hoyet
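
As referenced in the scientific description above, the following deliberately simplified sketch illustrates the kind of configuration workflow that AvatarReady automates. All type and member names are hypothetical and do not correspond to the tool's real API.

    // Hypothetical sketch of semi-automatic avatar configuration (illustrative only).
    public enum AvatarLibrary { RocketBox, CharacterCreator, Mixamo }
    public enum ControlMethod { MotionCapture, InverseKinematics }

    public class AvatarSetup
    {
        public AvatarLibrary Source;
        public ControlMethod Control;

        public void Configure(string avatarName)
        {
            // A real pipeline would: 1) import the humanoid model and remap its
            // skeleton to a common rig; 2) attach the chosen control method
            // (mocap stream or IK solver); 3) package the avatar for runtime swapping.
            System.Console.WriteLine($"Configured {avatarName} from {Source} with {Control}.");
        }
    }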

7.1.4 ElectroStim

  • Keywords:
    Virtual reality, Unity 3D, Electrotactility, Sensory feedback
  • Scientific Description:
    ElectroStim provides an agnostic haptic rendering framework able to exploit electrical stimulation capabilities, test different prototypes of electrodes quickly, and offer a fast and easy way to author electrotactile sensations so that they can quickly be compared when used as tactile feedback in VR interactions. The framework was designed to exploit electrotactile feedback, but it can also be extended to other tactile rendering systems such as vibrotactile feedback. Furthermore, it is designed to be easily extendable to other types of haptic sensations. (A schematic sketch of such stimulation control is given at the end of this subsection.)
  • Functional Description:
    This software provides the tools necessary to control an electrotactile stimulator in Unity 3D. The software allows precise control of the system to generate tactile sensations in virtual reality applications.
  • Publication:
  • Authors:
    Sebastian Santiago Vizcay, Fernando Argelaguet Sanz
  • Contact:
    Fernando Argelaguet Sanz
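
As referenced in the scientific description above, the following sketch shows how contact information can be mapped to electrotactile stimulation parameters in such a framework. Names and parameter ranges are assumptions made for illustration; this is not the actual ElectroStim API, and real intensities must come from a per-user calibration.

    // Hypothetical sketch of electrotactile pulse parameters and a simple modulation.
    public struct PulseParams
    {
        public float AmplitudeMilliAmps;  // stimulation intensity
        public float FrequencyHz;         // pulse repetition rate
        public float PulseWidthMicroSec;  // duration of each pulse
    }

    public static class ContactRenderer
    {
        // Map a normalized contact pressure (0..1) to stimulation parameters.
        public static PulseParams FromPressure(float pressure)
        {
            pressure = System.Math.Clamp(pressure, 0f, 1f);
            return new PulseParams
            {
                AmplitudeMilliAmps = 0.5f + 2.5f * pressure, // assumed, per-user calibrated range
                FrequencyHz = 30f + 170f * pressure,
                PulseWidthMicroSec = 200f
            };
        }
    }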

7.2 New platforms

7.2.1 Immerstar

Participants: Florian Nouviale, Ronan Gaugne.

URL: Immersia website

With the two virtual reality technological platforms Immersia and Immermove, grouped under the name Immerstar, the team has access to high-level scientific facilities. This equipment benefits the research teams of the center and has allowed them to extend their local, national and international collaborations. The Immerstar platform received CPER-Inria funding for the 2015-2019 period, which enabled several important evolutions. In particular, in 2018, a haptic system covering the entire volume of the Immersia platform was installed, allowing various configurations from single to dual haptic device usage, with either one or two users. In addition, a motion platform designed to introduce motion feedback for powered wheelchair simulations has also been incorporated (see Figure 2).

We celebrated the twentieth anniversary of the Immersia platform in November 2019 by inaugurating the new haptic equipment. We organized scientific presentations that gathered 150 participants, as well as visits for the support services, which welcomed 50 people.

Building on this support, in 2020 we participated in the PIA3-Equipex+ proposal CONTINUUM, which involves 22 partners, was successfully evaluated, and obtained funding in 2021. The CONTINUUM project will create a collaborative research infrastructure of 30 platforms located throughout France, to advance interdisciplinary research based on interaction between computer science and the human and social sciences. Thanks to CONTINUUM, 37 research teams will develop cutting-edge research programs focusing on visualization, immersion, interaction and collaboration, as well as on human perception, cognition and behaviour in virtual/augmented reality, with potential impact on societal issues. CONTINUUM enables a paradigm shift in the way we perceive, interact, and collaborate with complex digital data and digital worlds by putting humans at the center of the data processing workflows. The project will empower scientists, engineers and industry users with a highly interconnected network of high-performance visualization and immersive platforms to observe, manipulate, understand and share digital data, real-time multi-scale simulations, and virtual or augmented experiences. All platforms will feature facilities for remote collaboration with other platforms, as well as mobile equipment that can be lent to users to facilitate onboarding.

Since the end of 2021, the Immerstar platform has been involved in a new National Research Infrastructure, which gathers the main platforms of CONTINUUM.

Immerstar is also involved in the EUR Digisport, led by University of Rennes 2, and in the PIA4 DemoES AIR, led by University of Rennes 1.

Figure 2
Figure 2: Immersia platform: (Left) “Scale-One” Haptic system for one or two users. (Right) Motion platform for a powered wheelchair simulation.

8 New results

8.1 Virtual Reality Tools and Usages

8.1.1 Assistive Robotic Technologies for Next-Generation Smart Wheelchairs: Codesign and Modularity to Improve Users' Quality of Life.

Participants: Valérie Gouranton [contact].

This work 23 describes the robotic assistive technologies developed for users of electrically powered wheelchairs, within the framework of the European Union’s Interreg ADAPT (Assistive Devices for Empowering Disabled People Through Robotic Technologies) project. In particular, special attention is devoted to the integration of advanced sensing modalities and the design of new shared control algorithms. In response to the clinical needs identified by our medical partners, two novel smart wheelchairs with complementary capabilities and a virtual reality (VR)-based wheelchair simulator have been developed. These systems have been validated via extensive experimental campaigns in France (see Figure 3) and the United Kingdom.

This work was done in collaboration with the Rainbow team, MIS - Modélisation Information et Systèmes - UR UPJV 4290, University College London, Pôle Saint-Hélier - Médecine Physique et de Réadaptation [Rennes], CNRS-AIST JRL - Joint Robotics Laboratory, and IRSEEM - Institut de Recherche en Systèmes Electroniques Embarqués.

Figure 3
Figure 3: The wheelchair simulator tested by a volunteer in immersive conditions.

8.1.2 The Rubber Slider Metaphor: Visualisation of Temporal and Geolocated Data

Participants: Ferran Argelaguet [contact], Antonin Cheymol, Gwendal Fouché, Lysa Gramoli, Yutaro Hirao, Emilie Hummel, Maé Mavromatis, Yann Moullec, Florian Nouviale.

In the context of the IEEE VR 2022 3DUI Contest, entitled “Arts, Science, Information, and Knowledge - Visualized and Interacted”, this work 30 presents a VR application highlighting the usage of the rubber slider metaphor. The rubber slider is an augmentation of usual 2D slider controls where users can bend the slider axis in order to control an additional degree of freedom in the application (see Figure 4). This demonstration immerses users in a Virtual Environment where they can explore a database of art pieces geolocated on a 3D earth model, together with their corresponding art movements displayed on a timeline interface. Our application obtained the third place in the contest.
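
The mapping underlying the metaphor can be sketched as follows (a minimal sketch with assumed conventions, not the contest application's code): the projection of the grabbed handle onto the slider axis gives the usual 1D value, while the perpendicular displacement, i.e., the bending, gives the additional degree of freedom.

    // Minimal sketch of the rubber slider mapping (illustrative only).
    using System.Numerics;

    public static class RubberSlider
    {
        // start/end define the slider axis; handle is the 3D position of the grabbed knob.
        public static (float value, float bend) Evaluate(Vector3 start, Vector3 end,
                                                         Vector3 handle, float maxBend)
        {
            Vector3 axis = end - start;
            float t = Vector3.Dot(handle - start, axis) / axis.LengthSquared();
            t = System.Math.Clamp(t, 0f, 1f);                  // main slider value in [0,1]

            Vector3 onAxis = start + t * axis;
            float bend = (handle - onAxis).Length() / maxBend; // normalized extra DoF
            return (t, System.Math.Clamp(bend, 0f, 1f));
        }
    }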

Figure 4
Figure 4: Overview of the application with the earth model showing art pieces being selected with the ray (left), the timeline showing art movements (bottom left) and the selected art piece description panel (right).

8.1.3 Timeline Design Space for Immersive Exploration of Time-Varying Spatial 3D Data

Participants: Ferran Argelaguet [contact], Gwendal Fouché.

In the context of the Inria Challenge Naviscope, we have explored the usages of virtual reality for the visualization of biological data. Microscopy image observation is commonly performed on 2D screens, which limits human capacities to grasp volumetric, complex, and discrete biological dynamics. First, we published a perspective paper 28 providing an overall discussion of potential benefits and usages of virtual and augmented reality. Secondly, we explored how we could leverage virtual reality to improve the exploration of such datasets using timeline visualization. Timelines are common visualizations to represent and manipulate temporal data. However, timeline visualizations rarely consider spatio-temporal 3D data (e.g. mesh or volumetric models) directly. In this work 32, leveraging the increased workspace and 3D interaction capabilities of virtual reality (VR), we first propose a timeline design space for 3D temporal data extending the timeline design space proposed by Brehmer et al. The proposed design space adapts the scale, layout and representation dimensions to account for the depth dimension and how the 3D temporal data can be partitioned and structured. Moreover, an additional dimension is introduced, the support, which further characterizes the 3D dimension of the visualization. The design space is complemented by discussing the interaction methods required for the efficient visualization of 3D timelines in VR (see Figure 5). Secondly, we evaluate the benefits of 3D timelines through a formal evaluation (n=21). Taken together, our results showed that time-related tasks can be achieved more comfortably using timelines, and more efficiently for specific tasks requiring the analysis of the surrounding temporal context. Finally, we illustrate the use of 3D timelines with a use-case on morphogenetic analysis in which domain experts in cell imaging were involved in the design and evaluation process.
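
As a concrete illustration of one point of this design space, the helicoid representation shown in Figure 5 below can be obtained by placing each time step of the dataset along a helix. The sketch below uses assumed parameters (radius, rise per turn, steps per turn) and is not the paper's implementation.

    // Illustrative placement of time steps along a helicoid timeline.
    using System;
    using System.Numerics;

    public static class HelicoidTimeline
    {
        // timeStep in [0, stepCount); radius of the helix; vertical rise per full turn.
        public static Vector3 Position(int timeStep, float radius,
                                       float risePerTurn, int stepsPerTurn)
        {
            float angle = 2f * MathF.PI * timeStep / stepsPerTurn;
            float height = risePerTurn * timeStep / stepsPerTurn;
            return new Vector3(radius * MathF.Cos(angle), height, radius * MathF.Sin(angle));
        }
    }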

Figure 5
Figure 5: Illustration of a 3D timeline with a helicoid representation for the exploration of a time-varying embryo imaging dataset. The viridis colormap encodes the elongation ratio of cells involved in morphogenetic movements.

8.1.4 Control Your Virtual Agent in its Daily-activities for Long Periods

Participants: Valérie Gouranton [contact], Lysa Gramoli, Bruno Arnaldi.

Simulating human behavior through virtual agents is a key feature to improve the credibility of virtual environments (VE). For many use cases, such as the generation of daily-activity data, a good ratio between the agent's control and autonomy is required, so as to impose specific activities while letting the agent remain autonomous. In this work 41, we propose a model allowing a user to configure the level of the agent's decision-making autonomy according to their requirements. Our model, based on a BDI architecture, combines control constraints given by the user, an internal model simulating human daily needs for autonomy, and a scheduling process that creates an activity plan considering these two parts. Using a calendar, the user can specify the activities that must be performed at the required time. In addition, the user can indicate whether interruptions can happen during the activity calendar, to apply an effect induced by the internal model. The plan generated by our model can be executed in the VE by an animated agent in real time. To show that our model manages the ratio between control and autonomy well, we used a 3D home environment and compared the results with the input parameters (see Figure 6).
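
The control/autonomy trade-off at the core of the model can be sketched as follows (hypothetical types and thresholds; the actual model relies on a full BDI architecture and a richer scheduling process):

    // Minimal sketch: calendar entries imposed by the user take priority, unless
    // interruptions are allowed and a simulated need becomes too pressing.
    using System.Collections.Generic;
    using System.Linq;

    public class Need { public string Name; public double Urgency; public string Activity; }

    public class AgentScheduler
    {
        public Dictionary<int, string> Calendar = new Dictionary<int, string>(); // hour -> imposed activity
        public List<Need> Needs = new List<Need>();
        public bool InterruptionsAllowed;

        public string ChooseActivity(int hour)
        {
            bool imposed = Calendar.TryGetValue(hour, out string activity);
            Need mostUrgent = Needs.OrderByDescending(n => n.Urgency).FirstOrDefault();

            // Autonomy may override the calendar only if the user allows interruptions.
            if (imposed && !(InterruptionsAllowed && mostUrgent?.Urgency > 0.9))
                return activity;
            return mostUrgent?.Activity ?? "idle";
        }
    }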

Figure 6
Figure 6: Agent performing activities in a 3D smart home environment.

This work has been done in collaboration with Orange Labs (Jeremy Lacoche and Anthony Foulonneau).

8.1.5 A BIM-based model to study wayfinding signage using virtual reality

Participants: Valérie Gouranton [contact], Vincent Goupil, Bruno Arnaldi.

Wayfinding signage is essential for finding one's way in a large building. Unfortunately, there are no methodologies and standards for designing signage, so a good sign system depends on the experience of the signage company. Getting lost in public infrastructures can be disorienting or cause anxiety. Designing an efficient signage system is challenging, as the building needs to communicate a lot of information in a minimum of space. In this work 40, we propose a model to study wayfinding signage based on BIM models and the BIM open library, which allows the integration of signage design into a BIM model to perform analyses and comparisons. The study of signage is based on the user's perception, and virtual reality is currently the tool that best approximates it (see Figure 7). Our model helps to perform signage analysis during building design and to objectively compare wayfinding signage in a BIM model using virtual reality.

Figure 7
Figure 7: Virtual reality navigation with a smartphone indicating the lockers.

This work was done in collaboration with Vinci Construction (Anne-Solene Michaud) and CHU Rennes (Jean-Yves Gauvrit).

8.1.6 A Systematic Review of Navigation Assistance Systems for People with Dementia

Participants: Léa Pillette [contact], Guillaume Moreau, Jean-Marie Normand, Anatole Lécuyer, Mélanie Cogné.

Technological developments provide solutions to alleviate the tremendous impact that dementia-related navigation impairments have on health and autonomy. We systematically reviewed 25 the literature on devices tested to provide assistance to people with dementia during indoor, outdoor and virtual navigation (PROSPERO ID number: 215585). The Medline and Scopus databases were searched from inception. Our aim was to summarize the results from the literature to guide future developments. Twenty-three articles were included in our study, and three types of information were extracted from them. First, the types of navigation advice the devices provided were assessed through: (i) the sensory modality of presentation, e.g., visual and tactile stimuli, (ii) the navigation content, e.g., landmarks, and (iii) the timing of presentation, e.g., systematically at intersections. Second, we analyzed the technology the devices were based on, e.g., smartphones. Third, the experimental methodology used to assess the devices and the navigation outcome was evaluated. We report and discuss the results from the literature based on these three main characteristics. Finally, based on these considerations, recommendations are drawn, challenges are identified and potential solutions are suggested. Augmented reality-based devices, intelligent tutoring systems and social support should be further explored.

This work was done in collaboration with the CHU Rennes.

8.2 Avatars and Virtual Embodiment

8.2.1 Multi-sensory display of self-avatar's physiological state: virtual breathing and heart beating can increase sensation of effort in VR

Participants: Anatole Lécuyer [contact], Yann Moullec, Mélanie Cogné, Justine Saint-Aubert.

In this work 24 we explored the multi-sensory display of self-avatars’ physiological state in Virtual Reality (VR), as a means to enhance the connection between the users and their avatar (see Figure 8). Our approach consists in designing and combining a coherent set of visual, auditory and haptic cues to represent the avatar’s cardiac and respiratory activity. These sensory cues are modulated depending on the avatar’s simulated physical exertion. We notably introduce a novel haptic technique to represent respiratory activity using a compression belt simulating abdominal movements that occur during a breathing cycle. A series of experiments was conducted to evaluate the influence of our multi-sensory rendering techniques on various aspects of the VR user experience, including the sense of virtual embodiment and the sensation of effort during a walking simulation. A first study (N=30) that focused on displaying cardiac activity showed that combining sensory modalities significantly enhances the sensation of effort. A second study (N=20) that focused on respiratory activity showed that combining sensory modalities significantly enhances the sensation of effort as well as two sub-components of the sense of embodiment. Interestingly, the user’s actual breathing tended to synchronize with the simulated breathing, especially with the multi-sensory and haptic displays. A third study (N=18) that focused on the combination of cardiac and respiratory activity showed that combining both rendering techniques significantly enhances the sensation of effort. Taken together, our results promote the use of our novel breathing display technique and multi-sensory rendering of physiological parameters in VR applications where effort sensations are prominent, such as for rehabilitation, sport training, or exergames.
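
The general principle of this modulation can be sketched as follows, with assumed and purely illustrative physiological ranges (this is not the code used in the studies):

    // Illustrative modulation of the avatar's simulated heart and breathing rates
    // with exertion, used to drive visual, auditory and haptic renderings coherently.
    public static class PhysiologicalState
    {
        // exertion in [0,1]: 0 = rest, 1 = maximal simulated effort.
        public static (float heartBpm, float breathsPerMin) FromExertion(float exertion)
        {
            exertion = System.Math.Clamp(exertion, 0f, 1f);
            float heartBpm = 70f + exertion * (160f - 70f);     // assumed rest -> effort range
            float breathsPerMin = 12f + exertion * (30f - 12f); // assumed range
            return (heartBpm, breathsPerMin);
        }

        // Phase in [0,1) of a cycle, usable to synchronize the periphery overlay,
        // heartbeat sounds, the piezoelectric actuator and the compression belt.
        public static float CyclePhase(double timeSeconds, float ratePerMinute)
            => (float)(timeSeconds * ratePerMinute / 60.0 % 1.0);
    }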

Figure 8
Figure 8: We propose a novel approach to increase the connection with a self-avatar in virtual reality (A), by displaying its physiological state and physical exertion. It is based on a multi-sensory setup (B) involving visual, auditory and haptic displays. It includes visual effects such as a periphery overlay (C) simulating heart beating; or haptic stimulation delivered with a piezoelectric actuator (D) and a novel compression belt (E) which exerts pressure on the abdomen to simulate a virtual breathing.

This work was done in collaboration with CHU Rennes.

8.2.2 Influence of Vibrations on Impression of Walking and Embodiment With First- and Third-Person Avatar

Participants: Anatole Lécuyer [contact], Justine Saint-Aubert, Mélanie Cogné.

Previous studies have explored ways to increase the impression of walking of static VR users by means of vibrotactile underfoot feedback (see Figure 9). In this work 27, we investigated the influence of vibratory feedback when a static user embodies an avatar from a first- or third-person perspective: (i) we evaluated the benefit of tactile feedback compared to a simulation without it; (ii) we examined the interest of phase-based rendering simulating gait phases compared to other tactile renderings. To this end, we describe a user study (n = 44) designed to evaluate the influence of different tactile renderings on the impression of walking of static VR users. Participants observed a walking avatar from either a first- or a third-person perspective and compared 3 conditions: without tactile rendering, with constant tactile rendering reproducing simple contact information, and with tactile rendering based on gait phases. The results show that, overall, constant and phase-based rendering both improve the impression of walking in the first- and third-person perspectives. However, such tactile rendering decreases the impression of walking of some participants when the avatar is observed from a first-person perspective. Interestingly, the results also show that phase-based rendering does not improve the impression of walking from a first-person perspective compared to the constant rendering, but it does improve the impression of walking from a third-person perspective. Our results thus support the use of tactile rendering from a first-person perspective, and of phase-based rendering from a third-person perspective.
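
The difference between the constant and phase-based renderings can be sketched as follows (the gait-phase thresholds are assumed for the example and do not come from the paper):

    // Minimal sketch of underfoot vibrotactile rendering driven by the avatar's gait.
    public static class GaitTactileRenderer
    {
        // phase in [0,1): 0 = heel strike; stance/swing split assumed around 0.6.
        public static float PhaseBasedAmplitude(float phase)
        {
            if (phase < 0.1f) return 1.0f;                  // heel-strike burst
            if (phase > 0.5f && phase < 0.6f) return 0.6f;  // toe-off burst
            return 0f;                                      // swing: no vibration
        }

        // Constant rendering: simple contact information during the whole stance.
        public static float ConstantAmplitude(float phase)
            => phase < 0.6f ? 0.5f : 0f;
    }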

Figure 9
Figure 9: Experimental apparatus. Top: Virtual scene with self avatar walking. Bottom: Participant seated in a chair with feet placed above vibrotactile actuators.

This work was done in collaboration with CHU Rennes.

8.2.3 Influence of user posture and virtual exercise on impression of locomotion during VR observation

Participants: Anatole Lécuyer [contact], Justine Saint-Aubert, Mélanie Cogné.

A seated user watching his avatar walking in Virtual Reality (VR) may have an impression of walking. In this work 26, we show that such an impression can be extended to other postures and other locomotion exercises. We present two user studies in which participants wore a VR headset and observed a first-person avatar performing virtual exercises. In the first experiment, the avatar walked and the participants (n=36) tested the simulation in 3 different postures (standing, sitting and Fowler's posture). In the second experiment, other participants (n=18) were sitting and observed the avatar walking, jogging or stepping over virtual obstacles. We evaluated the impression of locomotion by measuring the impression of walking (respectively jogging or stepping) and embodiment in both experiments. The results show that participants had an impression of locomotion in the sitting, standing and Fowler's postures alike. However, Fowler's posture significantly decreased both the level of embodiment and the impression of locomotion, and the sitting posture seemed to decrease the sense of agency compared to the standing posture. Results also show that the majority of the participants experienced an impression of locomotion during the virtual walking, jogging, and stepping exercises. The embodiment was not influenced by the type of virtual exercise. Overall, our results suggest that an impression of locomotion can be elicited in different user postures and during different virtual locomotion exercises. They provide valuable insight for numerous VR applications in which the user observes a self-avatar moving, such as video games, gait rehabilitation, training, etc.

8.2.4 What Can I Do There? Controlling AR Self-Avatars to Better Perceive Affordances of the Real World

Participants: Anatole Lécuyer [contact], Adélaïde Genay.

In this work 39 we explored a new usage of Augmented Reality (AR) to extend perception and interaction within physical areas ahead of ourselves. To do so, we proposed to detach ourselves from our physical position by creating a controllable "digital copy" of our body that can be used to navigate in local space from a third-person perspective (see Figure 10). With such a viewpoint, we aim to improve our mental representation of distant space and understanding of action possibilities (called affordances), without requiring us to physically enter this space. Our approach relies on AR to virtually integrate the user’s body in remote areas in the form of an avatar. We discuss concrete application scenarios and propose several techniques to manipulate avatars in the third person as a part of a larger conceptual framework. Finally, through a user study employing one of the proposed techniques (puppeteering), we evaluate the validity of using third-person embodiment to extend our perception of the real world to areas outside of our proximal zone. We found that this approach succeeded in enhancing the user's accuracy and confidence when estimating their action capabilities at distant locations.

Figure 10

Figure 10: Control of self-avatars visualized through an Augmented Reality headset to better perceive interactions and affordances in the physical surroundings. Left: Testing fire exit paths with a gamepad. Center: Planning and testing a route before climbing by controlling the avatar's limbs with gestures. Right: Evaluating possible actions on a distant step stool with body-tracking mapping.

This work was done in collaboration with Inria POTIOC team.

8.2.5 Investigating Dual Body Representations During Anisomorphic 3D Manipulation

Participants: Ferran Argelaguet [contact], Anatole Lécuyer.

In virtual reality, several manipulation techniques distort users' motions, for example to reach remote objects or to increase precision. These techniques can become problematic when used with avatars, as they create a mismatch between the real performed action and the corresponding displayed action, which can negatively impact the sense of embodiment. In this work 17, we propose to use a dual representation during anisomorphic interaction (see Figure 11). A co-located representation serves as a spatial reference and reproduces the exact users' motion, while an interactive representation is used for distorted interaction. We conducted two experiments, investigating the use of dual representations with amplified motion (with the Go-Go technique) and decreased motion (with the PRISM technique). Two visual appearances for the interactive and co-located representations were explored. This exploratory study showed that it was possible to feel a global sense of embodiment towards such dual representations, and that they had no impact on performance. While interacting seemed more important than showing exact movements for agency during out-of-reach manipulation, people felt more in control of the realistic arm during close manipulation. We also found that people globally preferred having a single representation, but opinions diverged, especially for the Go-Go technique.
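
For reference, the two anisomorphic mappings used in the experiments follow well-known formulations from the literature; the sketch below gives them with illustrative constants:

    // Sketches of the two classic anisomorphic mappings (standard formulations
    // from the literature; constants are illustrative, not the paper's values).
    public static class AnisomorphicMappings
    {
        // Go-Go (Poupyrev et al.): beyond a threshold distance D from the body,
        // the virtual hand distance grows quadratically with the real distance d.
        public static float GoGo(float d, float D = 0.4f, float k = 6f)
            => d < D ? d : d + k * (d - D) * (d - D);

        // PRISM (Frees and Kessler): below a scaling-constant speed sc, virtual
        // motion is scaled down proportionally to hand speed, increasing precision.
        public static float PrismGain(float handSpeed, float minSpeed = 0.001f, float sc = 0.2f)
        {
            if (handSpeed < minSpeed) return 0f;   // treat as unintended drift
            if (handSpeed >= sc) return 1f;        // fast motion: one-to-one mapping
            return handSpeed / sc;                 // slow motion: scaled-down, precise
        }
    }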

Figure 11
Figure 11: Illustration of two types of dual body representation studied in this paper. On the left image, a ghost representation enables remote manipulation while a realistic co-located representation provides feedback with respect to the real user’s position. On the right image, a ghost representation provides feedback with respect to the user’s real position while a realistic representation enables precise manipulation with the environment thanks to slowed motion.

This work was done in collaboration with the Inria Mimetic Team.

8.2.6 Studying the Role of Self and External Touch in the Appropriation of Dysmorphic Hands

Participants: Jean-Marie Normand [contact], Antonin Cheymol, Rebecca Fribourg, Anatole Lécuyer, Ferran Argelaguet.

In Virtual Reality, self-touch (ST) stimulation is a promising method for inducing a sense of body ownership (SoBO) that does not require an external effector. However, its applicability to dysmorphic bodies has not been explored yet and remains uncertain, due to the requirement to provide incongruent visuomotor sensations. In this work 31, we studied the effect of ST stimulation on dysmorphic hands via haptic retargeting, as compared to a classical external-touch (ET) stimulation, on the SoBO (see Figure 12). Our results indicate that ST can induce levels of SoBO towards dysmorphic hands similar to those of ET stimulation, but that some types of dysmorphism might decrease the ST stimulation accuracy due to the nature of the retargeting that they induce.
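
The haptic retargeting principle used to redirect the touching hand can be sketched as follows (a minimal sketch in the spirit of body-warping hand redirection, not the study's implementation):

    // Minimal sketch of haptic retargeting: the virtual hand is progressively
    // offset so that it reaches the virtual (dysmorphic) contact point exactly
    // when the real hand reaches the real one.
    using System.Numerics;

    public static class HapticRetargeting
    {
        // progress in [0,1]: 0 at the start of the reach, 1 at physical contact.
        public static Vector3 VirtualHandPosition(Vector3 realHand, Vector3 physicalTarget,
                                                  Vector3 virtualTarget, float progress)
        {
            Vector3 fullOffset = virtualTarget - physicalTarget;
            return realHand + System.Math.Clamp(progress, 0f, 1f) * fullOffset;
        }
    }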

Figure 12
Figure 12: We assessed the sense of body ownership towards different dysmorphic hands, induced by two visuotactile stimulation techniques (right part of the figure, from left to right): self-touch (users applying the tactile stimulation themselves) and external-touch (the experimenter applying it). The left part of the figure shows all the hand appearances that we studied, from left to right: anthropomorphic, longer-finger and block; hatched lines show matching tactile stimulation areas.

This work was done in collaboration with the University of Tokyo.

8.2.7 Manipulating the Sense of Embodiment in Virtual Reality: a study of the interactions between the senses of agency, ownership and self-location

Participants: Jean-Marie Normand [contact], Martin Guy, Guillaume Moreau.

In Virtual Reality (VR), the Sense of Embodiment (SoE) corresponds to the feeling of controlling and owning a virtual body, usually referred to as an avatar. The SoE is generally divided into three components: the Sense of Agency (SoA), which characterises the level of control of the user over the avatar; the Sense of Self-Location (SoSL), which is the feeling of being located in the avatar; and the Sense of Body-Ownership (SoBO), which represents the attribution of the virtual body to the user. While previous studies showed that the SoE can be manipulated by disturbing either the SoA, the SoBO or the SoSL, the relationships and interactions between these three components still remain unclear. In this work 33, we aim at extending the understanding of the SoE and the interactions between its components by 1) experimentally manipulating them in VR via biased visual feedback, and 2) understanding whether each sub-component can be selectively altered or not. To do so, we designed a within-subject experiment (see Figure 13) where 47 right-handed participants had to perform movements of their right hand under different experimental conditions impacting the sub-components of embodiment: the SoA was modified by impacting the control of the avatar with biased visual feedback, the SoBO was altered by modifying the realism of the virtual right hand (anthropomorphic cartoon hand or non-anthropomorphic stick “fingers”), and the SoSL was controlled via the user's point of view (first or third person). After each trial, participants rated their levels of agency, ownership and self-location on a 7-item Likert scale. Analysis of the results revealed that the three components could not be selectively altered in this experiment. Nevertheless, these preliminary results pave the way for further studies.

Figure 13
Figure 13: This experiment assessed the sense of embodiment during right-hand movements with different levels of control, different levels of realism of the avatar, and different levels of visuospatial perspective. Top (from left to right): anthropomorphic cartoon hand (Anthropomorphic-hand), stick-fingers hand (Stick-fingers), inverted 2nd and 4th fingers (Manipulated). All images are in 1PP. Bottom (from left to right): third-person perspective in peripersonal space (3PP-PP), third-person perspective in extra-personal space (3PP-EP).

This work was done in collaboration with the University of Bordeaux.

8.2.8 Comparing Experimental Designs for Virtual Embodiment Studies

Participants: Anatole Lécuyer [contact], Ferran Argelaguet, Grégoire Richard.

When designing virtual embodiment studies, one of the key choices is the nature of the experimental factors, either between-subjects or within-subjects. However, it is well known that each design has advantages and disadvantages in terms of statistical power, sample size requirements and confounding factors. In this work 36, we reported a within-subjects experiment (N=92) comparing self-reported embodiment scores in a visuomotor task with two conditions: synchronous motions, and asynchronous motions with a latency of 300 ms. With the gathered data, using a Monte-Carlo method, we created numerous simulations of within- and between-subjects experiments by selecting subsets of the data. In particular, we explored the impact of the number of participants on the replicability of the results of the 92-participant within-subjects experiment. For the between-subjects simulations, only the first condition seen by each user was considered. The results showed that while the replicability of the results increased as the number of participants increased for the within-subjects simulations, between-subjects simulations were not able to replicate the initial results no matter the number of participants. We discuss the potential reasons that could have led to this surprising result and potential methodological practices to mitigate them.
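
The subsampling procedure can be sketched as follows (an illustrative implementation of the general idea, not the authors' analysis code):

    // Monte-Carlo sketch: repeatedly draw subsets of n participants from the full
    // within-subjects dataset and count how often the original effect replicates.
    using System;
    using System.Linq;

    public static class ReplicabilitySimulation
    {
        // scoresSync/scoresAsync: paired embodiment scores per participant (same index).
        // effectTest: returns true if the subset shows the original significant effect.
        public static double ReplicationRate(double[] scoresSync, double[] scoresAsync,
                                             int n, int iterations,
                                             Func<double[], double[], bool> effectTest)
        {
            var rng = new Random(42);
            int replications = 0;
            for (int i = 0; i < iterations; i++)
            {
                // Sample n distinct participants without replacement.
                int[] idx = Enumerable.Range(0, scoresSync.Length)
                                      .OrderBy(_ => rng.Next()).Take(n).ToArray();
                if (effectTest(idx.Select(j => scoresSync[j]).ToArray(),
                               idx.Select(j => scoresAsync[j]).ToArray()))
                    replications++;
            }
            return (double)replications / iterations;
        }
    }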

This work was done in collaboration with the Inria team Loki.

8.3 Haptic Feedback

8.3.1 Use of Electrotactile Feedback for Finger-Based Interactions in Virtual Reality

Participants: Ferran Argelaguet [contact], Sebastian Vizcay, Panagiotis Kourtesis.

The use of electrotactile feedback in Virtual Reality (VR) has shown promising results for providing tactile information and sensations. While progress has been made to provide custom electrotactile feedback for specific interaction tasks, it remains unclear which modulations and rendering algorithms are preferred in rich interaction scenarios.

First, we proposed a technological overview of electrotactile feedback, as well as a systematic review and meta-analysis of its applications for hand-based interactions 19. We discussed the different electrotactile systems according to the type of application. We also provided a quantitative aggregation of the findings, to offer a high-level overview of the state of the art, and suggested future directions. Moreover, we proposed a unified tactile rendering architecture and explored the most promising modulations to render finger interactions in VR 37, 38. Based on a literature review, we designed six electrotactile stimulation patterns/effects (EFXs) striving to render different tactile sensations. In a user study (N=18), we assessed the six EFXs in three diverse finger interactions: 1) tapping on a virtual object; 2) pressing down a virtual button; 3) sliding along a virtual surface (see Figure 14). Results showed that the preference for certain EFXs depends on the task at hand. No significant preference was detected for tapping (short and quick contact); EFXs that render dynamic intensities or dynamic spatio-temporal patterns were preferred for pressing (continuous dynamic force); EFXs that render moving sensations were preferred for sliding (surface exploration). The results showed the importance of the coherence between the modulation and the interaction being performed, and the study demonstrated the versatility of electrotactile feedback and its efficiency in rendering different haptic information and sensations. Finally, we examined the association between performance and perception and the potential effects that tactile feedback modalities (electrotactile, vibrotactile) could generate 20. In a user study (N=24), participants performed a standardized Fitts's law target acquisition task using three feedback modalities: visual, visuo-electrotactile, and visuo-vibrotactile. The users completed 18 trial conditions (3 target sizes × 2 distances × 3 feedback modalities). Size perception, distance perception, and (movement) time perception were assessed at the end of each trial. Performance-wise, the results showed that electrotactile feedback enabled significantly better accuracy than vibrotactile and visual feedback, with vibrotactile feedback providing the worst accuracy. Electrotactile and visual feedback enabled comparable reaction times, while vibrotactile feedback led to substantially slower reaction times than visual feedback.
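
For readers unfamiliar with the paradigm, the sketch below shows how such a Fitts's law trial grid and the associated index of difficulty are typically computed. The target widths and amplitudes are illustrative assumptions, not the values used in the study.

```python
# Minimal sketch of the trial grid and Fitts's index of difficulty (ID)
# in a target-acquisition study: 3 sizes x 2 distances x 3 feedback
# modalities = 18 trial conditions. Values are illustrative.
from itertools import product
from math import log2

sizes = [0.02, 0.04, 0.08]          # target widths W (m), assumed
distances = [0.15, 0.30]            # movement amplitudes D (m), assumed
modalities = ["visual", "visuo-electrotactile", "visuo-vibrotactile"]

trials = list(product(sizes, distances, modalities))
assert len(trials) == 18

for w, d, mod in trials:
    iD = log2(d / w + 1)            # Shannon formulation of Fitts's ID
    print(f"{mod:22s} W={w:.2f} D={d:.2f} ID={iD:.2f} bits")
```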

Figure 14
Figure 14: Figures (a)-(c) illustrate the three evaluated interaction tasks: tapping, pressing and sliding. Figure (d) illustrates the electrotactile equipment used in this work.

This work was done in collaboration with the Inria RAINBOW team.

8.3.2 Watch out for the Robot! Designing Visual Feedback Safety Techniques When Interacting With Encountered-Type Haptic Displays

Participants: Anatole Lécuyer [contact], Ferran Argelaguet.

Encountered-Type Haptic Displays (ETHDs) enable users to touch virtual surfaces by using robotic actuators capable of co-locating real and virtual surfaces, without encumbering users with wearable actuators. One of the main challenges of ETHDs is to ensure that the robotic actuators do not interfere with the VR experience, by avoiding unexpected collisions with users. This work 22 presented a design space for safety techniques using visual feedback to make users aware of the robot's state and thus reduce potential unintended collisions. The blocks that compose this design space focus on what and when the feedback is displayed, and how it protects the user (see Figure 15). Using this design space, a set of 18 techniques was developed, exploring variations of the three dimensions. An evaluation questionnaire focusing on immersion and perceived safety was designed and administered to a group of experts, providing a first assessment of the proposed techniques.
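
Such a design space is essentially the Cartesian product of its dimensions. The sketch below illustrates the enumeration principle with invented placeholder values for the what/when/how dimensions; it happens to yield 18 combinations like the paper, but the actual taxonomy values are not reproduced here.

```python
# Minimal sketch of enumerating a three-dimensional design space
# (what / when / how) for ETHD safety feedback. The dimension values
# below are invented placeholders, not the paper's taxonomy.
from itertools import product

WHAT = ["robot body", "end effector", "danger zone"]
WHEN = ["always", "on approach", "on predicted collision"]
HOW = ["ghost overlay", "barrier"]

techniques = [
    {"what": w, "when": t, "how": h} for w, t, h in product(WHAT, WHEN, HOW)
]
print(len(techniques), "candidate techniques")  # 3 * 3 * 2 = 18
```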

Figure 15
Figure 15: Several examples of the explored design space features. This figure illustrates the features of the design space by showcasing the visual feedback used in some of the implemented safety techniques.

This work was done in collaboration with the Inria Loki team.

8.3.3 The “Kinesthetic HMD”: Inducing Self-Motion Sensations in Immersive Virtual Reality With Head-Based Force Feedback

Participants: Anatole Lécuyer [contact].

The sensation of self-motion is essential in many virtual reality applications, from entertainment to training, such as flying and driving simulators. While the common approach used in amusement parks is to actuate the seats with cumbersome systems, multisensory integration can also be leveraged to obtain rich effects from lightweight solutions. In this work 16, we introduced a novel approach called the “Kinesthetic HMD”: actuating a head-mounted display with force feedback in order to provide sensations of self-motion (see Figure 16). We discussed its design considerations and demonstrated an augmented flight simulator use case with a proof-of-concept prototype. We conducted a user study assessing our approach's ability to enhance self-motion sensations. Taken together, our results show that the Kinesthetic HMD provides significantly stronger and more egocentric sensations than a visual-only self-motion experience. Thus, by providing congruent vestibular and proprioceptive cues related to balance and self-motion, the Kinesthetic HMD represents a promising approach for a variety of virtual reality applications in which motion sensations are prominent.
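
One plausible way to turn simulated motion into head-based force commands is a classic washout scheme: high-pass filter the vehicle acceleration so that only transient cues are rendered, then scale and clamp the result. The sketch below is an assumption for illustration only, not the published Kinesthetic HMD controller.

```python
# Minimal sketch of a washout-style control loop for head-based force
# feedback: high-pass ("washout") filter the simulated vehicle
# acceleration and scale it into a clamped force command.
# All parameters are illustrative assumptions.
import numpy as np

def washout_force(accel, dt=0.01, tau=1.0, gain=5.0, f_max=10.0):
    """First-order high-pass filter of an acceleration trace -> force (N)."""
    alpha = tau / (tau + dt)
    force, prev_a, prev_y = [], accel[0], 0.0
    for a in accel:
        prev_y = alpha * (prev_y + a - prev_a)   # discrete high-pass step
        prev_a = a
        force.append(np.clip(gain * prev_y, -f_max, f_max))
    return np.array(force)

t = np.arange(0, 5, 0.01)
accel = np.where((t > 1) & (t < 2), 2.0, 0.0)    # 1 s acceleration pulse
print(washout_force(accel).max())
```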

Figure 16
Figure 16: The Kinesthetic HMD: head-based force feedback enhancing self-motion sensations.

8.3.4 Haptic Rattle: Multi-Modal Rendering of Virtual Objects Inside a Hollow Container

Participants: Valérie Gouranton [contact], Ronan Gaugne [contact], Emilie Hummel, Anatole Lécuyer.

The sense of touch plays a strong role in the perception of the properties and characteristics of hollow objects. Shaking a hollow container to get an insight into its content is a natural and common interaction. In 35, we present a multi-modal rendering approach for the simulation of virtual moving objects inside a hollow container, based on the combination of haptic and audio cues generated by voice-coil actuators and high-fidelity headphones, respectively. We conducted a user study in which thirty participants were asked to interact with a target cylindrical hollow object and estimate the number of moving objects inside (see Figure 17), relying on haptic feedback only, audio feedback only, or a combination of both. Results indicate that the combination of several senses is important in the perception of the content of a container.
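
The underlying rendering principle can be sketched as a small physics loop: integrate virtual balls inside the container under the measured IMU acceleration and emit a haptic/audio impulse at each wall collision. Parameters and the event format below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a rendering loop for virtual objects inside a shaken
# container: a 1-D ball simulation driven by IMU acceleration, where each
# wall collision triggers a haptic/audio impulse. Values are illustrative.
import numpy as np

def simulate_rattle(imu_accel, n_balls=3, length=0.1, dt=0.001):
    pos = np.linspace(0.02, 0.08, n_balls)    # ball positions along the tube
    vel = np.zeros(n_balls)
    events = []                               # (time, ball, speed) impulses
    for k, a in enumerate(imu_accel):
        vel += -a * dt                        # pseudo-force in container frame
        pos += vel * dt
        for i in range(n_balls):
            if pos[i] <= 0.0 or pos[i] >= length:   # wall hit
                events.append((k * dt, i, abs(vel[i])))
                vel[i] = -0.6 * vel[i]        # restitution coefficient
                pos[i] = np.clip(pos[i], 0.0, length)
    return events  # to be fed to voice-coil / audio synthesis

shake = 30 * np.sin(2 * np.pi * 4 * np.arange(0, 1, 0.001))  # 4 Hz shaking
print(len(simulate_rattle(shake)), "impulses in 1 s")
```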

Figure 17
Figure 17: Left: Our cylindrical prop, housing an IMU and two voice-coil actuators. Right: Experimental setup for the manipulation of the device.

This work was done in collaboration with Inrap, and the Rainbow team.

8.3.5 “Kapow”!: Studying the Design of Visual Feedback for Representing Contacts in Extended Reality

Participants: Anatole Lécuyer [contact], Julien Cauquis, Géry Casiez, Jean-Marie Normand.

In the absence of haptic feedback, the perception of contact with virtual objects can rapidly become a problem in extended reality (XR) applications. XR developers often rely on visual feedback to inform the user and display contact information. However, as of today, there is no clear path on how to design and assess such visual techniques. In this work 29, we proposed a design space for the creation of visual feedback techniques meant to represent contact with virtual surfaces in XR. Based on this design space, we conceived a set of various visual techniques, including novel approaches based on onomatopoeia and inspired by cartoons, as well as visual effects based on physical phenomena (see Figure 18). Then, we conducted a preliminary online user study with 60 participants, assessing 6 visual feedback techniques in terms of user experience. We could notably assess, for the first time, the potential influence of the interaction context by comparing the participants' answers in two different scenarios: industrial versus entertainment conditions. Taken together, our design space and initial results could inspire XR developers for a wide range of applications in which the augmentation of contact is prominent, such as vocational training, industrial assembly/maintenance, surgical simulation, videogames, etc.

Figure 18
Figure 18: Our set of visual feedback techniques meant to represent contact in eXtended Reality. These techniques were conceived using the design space presented in this paper and implemented in a Microsoft HoloLens 2 (left). The techniques are the following: A) Kapow, B) Lightning, C) Color Change, D) Arrow, E) Disk, F) Deformation, G) Spark3D, H) Hole, I) Ripple, J) Crack, K) Poof, L) Shaking, M) Bubble3D, and N) Snowflakes.

8.4 Brain Computer Interfaces

8.4.1 Toward an Adapted Neurofeedback for Post-stroke Motor Rehabilitation: State of the Art and Perspectives

Participants: Anatole Lécuyer [contact], Salomé Le Franc, Gabriela Herrera.

Stroke is a severe health issue, and motor recovery after stroke remains an important challenge in the rehabilitation field. Neurofeedback (NFB), as part of a brain–computer interface, is a technique for modulating brain activity using online feedback, which has proved to be useful in motor rehabilitation for the chronic stroke population in addition to traditional therapies. Nevertheless, its use and applications in the field still leave unresolved questions. The brain pathophysiological mechanisms after stroke remain partly unknown, and the possibilities for intervening on these mechanisms to promote cerebral plasticity are limited in clinical practice. In NFB motor rehabilitation, the aim is to adapt the therapy to the patient's clinical context using brain imaging, considering the time after stroke, the localization of brain lesions and their clinical impact, while taking into account currently used biomarkers and technical limitations. These modern techniques also allow a better understanding of the physiopathology and neuroplasticity of the brain after stroke. In this work 21, we conducted a narrative literature review of studies using NFB for post-stroke motor rehabilitation. The main goal was to decompose all the elements that can be modified in NFB therapies, so that they can be adapted to the patient's context and to current technological limits. Adaptation and individualization of care could derive from this analysis to better meet the patients' needs. We focused on and highlighted the various clinical and technological components, considering the most recent experiments. The second goal was to propose general recommendations and to highlight the limits and perspectives, in order to improve our general knowledge in the field and allow clinical applications. We highlighted the multidisciplinary nature of this work, which combines engineering abilities and medical experience. Engineering development is essential for the available technological tools and aims to increase neuroscience knowledge on the NFB topic. This technological development was born out of the real clinical need to provide complementary therapeutic solutions to a public health problem, considering the actual clinical context of the post-stroke patient and the practical limits resulting from it.

This work was done in collaboration with CHU Rennes, LORIA, PERSEUS and EMPENN teams.

8.4.2 Grasp-IT Xmod: A Multisensory Brain-Computer Interface for Post-Stroke Motor Rehabilitation

Participants: Anatole Lécuyer [contact], Gabriela Herrera.

This work presented Grasp-IT Xmod 34, a game-based brain-computer interface that aims to improve post-stroke upper limb motor rehabilitation (see Figure 19). Indeed, stroke survivors need extensive rehabilitation work, including kinesthetic motor imagery (KMI), to stimulate neurons and recover lost motor function. However, KMI is intangible without feedback. After recording the electrical activity of the brain with an electroencephalographic system during KMI, multisensory feedback is given based on the quality of the KMI. This feedback is composed of a visual environment and, more originally, of a vibrotactile device placed on the forearm. Beyond providing an engaging and motivating setting, the vibrotactile feedback, synchronous with the visual feedback, aims at encouraging the incorporation of the imagined movement by the user, in order to improve their KMI performance and, consequently, the rehabilitation process.
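
A common way to implement such a pipeline, sketched below under the assumption of a standard event-related desynchronisation (ERD) measure in the mu band, is to compare band power during KMI with a rest baseline and map the difference to a feedback gain. The actual Grasp-IT Xmod signal processing may differ.

```python
# Minimal sketch of a KMI-to-feedback pipeline: band-pass the EEG in the
# mu band (8-12 Hz), compare its power with a rest baseline (ERD), and
# map the result to a feedback intensity in [0, 1]. Sampling rate and
# thresholds are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # sampling rate (Hz), assumed

def mu_power(eeg, fs=FS):
    b, a = butter(4, [8, 12], btype="bandpass", fs=fs)
    return np.mean(filtfilt(b, a, eeg) ** 2)

def feedback_intensity(trial, baseline, fs=FS):
    erd = 1.0 - mu_power(trial, fs) / mu_power(baseline, fs)
    return float(np.clip(erd, 0.0, 1.0))  # drives visual + vibrotactile gain

rng = np.random.default_rng(1)
baseline = rng.normal(size=2 * FS)         # 2 s of rest EEG (synthetic)
trial = 0.7 * rng.normal(size=2 * FS)      # attenuated mu power during KMI
print(feedback_intensity(trial, baseline))
```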

Figure 19
Figure 19: The Grasp-IT Xmod Brain-Computer Interface. Right) 1a) Electroencephalography acquisition system with 1b) referring to the electrode cap. 2) Screen used for the visual feedback. 3) Vibrotactile device used for haptic feedback. 4) Experimenter computer to control the BCI and analyze the EEG signal. Left) Details of the vibrotactile device with three vibration motors (VM).

This work was done in collaboration with UMR LORIA and PERSEUS team (University of Lorraine).

8.4.3 Designing Functional Prototypes Combining BCI and AR for Home Automation

Participants: Anatole Lécuyer [contact].

In this technology report 43, we presented how to design functional prototypes of smart home systems based on Augmented Reality (AR) and Brain-Computer Interfaces (BCI). A prototype was designed and integrated into a home automation platform, aiming to illustrate the potential of combining EEG-based interaction with Augmented Reality interfaces for operating home appliances. Our proposed solution enables users to interact with different types of appliances, from “on/off” objects such as lamps to multiple-command objects such as televisions (see Figure 20). The report presents the different steps of the design and implementation of the system, and proposes general guidelines for the future development of such solutions. These guidelines start with the description of the functional and technical specifications that should be met, before introducing a generic and modular software architecture that can be maintained and adapted for different types of BCI, AR displays and connected objects. Overall, this technology report paves the way for the development of a new generation of smart home systems, exploiting brain activity and Augmented Reality for direct interaction with multiple home appliances.
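
The flickering icons suggest an SSVEP-style paradigm; under that assumption, the sketch below illustrates how an appliance could be selected by correlating the EEG with reference signals at each icon's flicker frequency. Frequencies and channel handling are invented for illustration and are not the report's implementation.

```python
# Minimal sketch of selecting an appliance from flickering AR icons,
# assuming an SSVEP-style paradigm: correlate the EEG with sine/cosine
# references at each icon's flicker frequency and pick the best match.
import numpy as np

FS = 250
ICONS = {"lamp": 8.0, "fan": 10.0, "tv": 12.0}  # flicker frequencies (Hz), assumed

def ssvep_score(eeg, freq, fs=FS):
    t = np.arange(len(eeg)) / fs
    refs = [np.sin(2 * np.pi * freq * t), np.cos(2 * np.pi * freq * t)]
    return max(abs(np.corrcoef(eeg, r)[0, 1]) for r in refs)

def select_icon(eeg):
    return max(ICONS, key=lambda name: ssvep_score(eeg, ICONS[name]))

t = np.arange(0, 2, 1 / FS)
eeg = np.sin(2 * np.pi * 10.0 * t) + np.random.default_rng(2).normal(0, 1, t.size)
print(select_icon(eeg))  # -> "fan"
```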

Figure 20
Figure 20: Illustration of the implemented AR interface. The default view of the system (Left) represents the different objects in the field of view, with associated flickering icons. The fan and the light could be switched ON or OFF with a single command. The interaction with the television was conducted through a hierarchical menu. After selecting the TV, the possible commands to issue appeared on the interface (Right).

This work was done in collaboration with Orange.

8.5 Art and Cultural Heritage

8.5.1 eXtended Reality for Cultural Heritage

Participants: Ronan Gaugne [contact], Valérie Gouranton [contact], Jean-Marie Normand, Flavien Lécuyer.

3D data production techniques, although increasingly used by archaeologists and Cultural Heritage practitioners, are most often limited to the production of 2D or 3D images. Beyond the visualization of these data, it is necessary to consider their possible interactions and uses. Virtual Reality, Augmented Reality, and Mixed Reality, collectively known as eXtended Reality, XR, or Cross Reality, make it possible to envisage natural and/or complex interactions with 3D digital environments. These are physical, tangible, or haptic (i.e., with force feedback) interactions, which can be understood through different modalities or metaphors, associated with procedures or gestures. These interactions can be integrated by archaeologists in the "operating chain" of an operation (from the ground to the study phase), or be part of a functional reconstitution of the procedures and gestures of the past, in order to help understand an object, a site, or even a human activity. The different case studies presented in 45 result from collaborations between archaeologists, historians, and computer scientists. They illustrate different interactions in 3D environments, whether operational (support for excavation processes, see Figure 21) or functional (archaeological objects, human activities of the past).

Figure 21
Figure 21: Operational interaction in XR for the archaeological process.

This work was done in collaboration with Inrap, and UMR Trajectoires.

8.5.2 Use of Different Digitization Methods for the Analysis of Cut Marks on the Oldest Bone Found in Brittany (France)

Participants: Valérie Gouranton [contact], Ronan Gaugne [contact].

Archaeological 3D digitization of skeletal elements is an essential aspect of the discipline. Objectives are various: archiving data (especially before destructive sampling, for example for biomolecular studies), supporting study, or enabling manipulation for pedagogical purposes. As techniques are rapidly evolving, the question that arises is which methods are appropriate to answer the different questions while guaranteeing a sufficient quality of information. The combined use of different 3D technologies for the study of a single Mesolithic bone fragment from Brittany (France) (see Figure 22, Top), proposed in 15, was an opportunity to compare different 3D digitization methods. This oldest human bone found in Brittany, a clavicle in two pieces, was dug up from the Mesolithic shell midden of Beg-er-Vil in Quiberon and dated to ca. 8200-8000 years BP. Its cut marks are bound to post-mortem processing, performed on fresh bone in order to remove the integuments, and needed to be better characterized. The clavicle was studied through a process that combines advanced 3D image acquisition, 3D processing, and 3D printing, with the goal of providing relevant support for the experts involved in the work. The bones were first studied with metallographic microscopy, scanned with a CT scanner, and digitized with photogrammetry in order to obtain a high-quality textured model. The CT scan appeared to be insufficient for a detailed analysis; the study was thus completed with a μ-CT, providing a very accurate 3D model of the bone. Several 3D-printed copies of the collarbone were produced, at different scales, in order to support knowledge sharing between the experts involved in the study (see Figure 22, Bottom). The 3D models generated from μ-CT and photogrammetry were combined to provide an accurate and detailed 3D model. This model was used to study desquamation and the different cut marks, including their angle of attack. These cut marks were also studied with traditional binoculars and digital microscopy. This last technique allowed characterizing their type, revealing a probable meat-cutting process with a flint tool. This cross-analysis work allows us to document a fundamental heritage piece and to ensure its preservation. Copies are also available for the regional museums.

Figure 22
Figure 22: Clavicle study. Top: The two fragments of the clavicle. Bottom: Annotated 3D printings of the clavicle.

This work was done in collaboration with Inrap, UMR CReAAH, and UMR ArchAm.

8.5.3 Sport heritage in VR: Real Tennis case study

Participants: Ronan Gaugne [contact], Valérie Gouranton [contact].

Traditional Sports and Games (TSG) are as varied as human cultures. Preserving knowledge of these practices is essential, as they are an expression of intangible cultural heritage, as emphasized by UNESCO in 1989. With the increasing development of virtual reconstructions in the domain of Cultural Heritage, and thanks to advances in the production and 3D animation of virtual humans, interactive simulations and experiences of these activities have emerged to preserve this intangible heritage. In 18, we proposed a methodological approach to design an immersive reconstitution of a TSG in Virtual Reality, with a formalization of the elements involved in such a reconstitution, and we illustrated this approach with the example of real tennis (Jeu de Paume or Courte Paume in French) (see Figure 23). As a result, we presented first elements of evaluation of the resulting VR application, including performance tests, a preliminary pilot study, and interviews with high-ranked players. Real tennis is a racket sport that has been played for centuries and is considered the ancestor of tennis. It was a very popular sport in Europe during the Renaissance period, practiced by every layer of society. It is still practiced today in a few courts around the world, especially in France, the United Kingdom, Australia and the USA. It has been listed in the Inventory of Intangible Cultural Heritage in France since 2012.

Figure 23
Figure 23: Virtual real tennis in the Immersia facility.

This work was done in collaboration with Inrap, and UMR Trajectoires. This work was done in the context of the M2 internship of Pierre Duc-Martin (INSA Rennes).

8.5.4 Could you relax in an artistic co-creative virtual reality experience?

Participants: Ronan Gaugne [contact], Valérie Gouranton [contact], Julien Lomet.

The work presented in 42 contributes to the design and study of artistic collaborative virtual environments through the presentation of an immersive and interactive digital artwork installation and the evaluation of the impact of the experience on visitors' emotional state. The experience is centered on a dance performance, involves collaborative spectators who are engaged in the experience through full-body movements (see Figure 24), and is structured in three phases: a phase of relaxation and discovery of the universe, a phase of co-creation, and a phase of co-active contemplation.

The collaborative artwork “Creative Harmony” was designed within a multidisciplinary team of artists, researchers and computer scientists from different laboratories. The aesthetic of the artistic environment is inspired by 19th-century German Romantic painting. In order to foster co-presence, each participant in the experience is associated with an avatar that represents both their body and their movements. The music is an original composition designed to give a peaceful and meditative ambiance to the universe of “Creative Harmony”. The evaluation of the impact on visitors' mood is based on the “Brief Mood Introspection Scale” (BMIS), a standard tool widely used in psychological and medical contexts. We also present an assessment of the experience through the analysis of questionnaires filled in by the visitors. We observed an increase in the Positive-Tired indicator and a decrease in the Negative-Relaxed indicator, demonstrating the relaxing capabilities of the immersive virtual environment.
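
For concreteness, the sketch below shows how such pre/post BMIS subscale changes can be scored. The adjective-to-subscale groupings used here are placeholders, not the official BMIS key.

```python
# Minimal sketch of scoring a pre/post mood change with a BMIS-style
# questionnaire: sum positively keyed items, subtract negatively keyed
# ones, and compare before vs. after the experience. Item groupings are
# placeholders, not the official BMIS key.
positive_tired_items = {"lively": +1, "active": +1, "tired": -1, "drowsy": -1}
negative_relaxed_items = {"gloomy": +1, "nervous": +1, "calm": -1, "content": -1}

def subscale(ratings, key):
    """ratings: adjective -> 1..4 agreement; key: adjective -> +1/-1."""
    return sum(sign * ratings[item] for item, sign in key.items())

pre = {"lively": 2, "active": 2, "tired": 3, "drowsy": 3,
       "gloomy": 2, "nervous": 3, "calm": 2, "content": 2}
post = {"lively": 3, "active": 3, "tired": 2, "drowsy": 1,
        "gloomy": 1, "nervous": 1, "calm": 4, "content": 4}

for name, key in [("Positive-Tired", positive_tired_items),
                  ("Negative-Relaxed", negative_relaxed_items)]:
    print(name, subscale(post, key) - subscale(pre, key))
```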

Figure 24
Figure 24: Top: Creative Harmony live performance in Immersia. Bottom: Connected interactive spectators with HMDs.

This work was done in collaboration with the team Arts: pratiques et poétiques at University Rennes 2 (Joel Laurent), and UMR Inrev at University Paris 8 (Cedric Plessiet).

9 Bilateral contracts and grants with industry

9.1 Grants with Industry

InterDigital

Participants: Nicolas Olivier, Ferran Argelaguet [contact].

This grant started in February 2019. It supports Nicolas Olivier's CIFRE PhD program with the InterDigital company on “Avatar Stylization”. This PhD is co-supervised with the MimeTIC team. The PhD was successfully defended in March 2022.

Orange Labs

Participants: Lysa Gramoli, Bruno Arnaldi, Valérie Gouranton [contact].

This grant started in October 2020. It supports Lysa Gramoli's PhD program with Orange Labs on “Simulation of autonomous agents in connected virtual environments”.

Sogea Bretagne

Participants: Vincent Goupil, Bruno Arnaldi, Valérie Gouranton [contact].

This grant started in October 2020. It supports Vincent Goupil's CIFRE PhD program with Sogea Bretagne on “Hospital 2.0: Generation of Virtual Reality Applications by BIM Extraction”.

10 Partnerships and cooperations

10.1 International initiatives

10.1.1 Participation in other International Programs

SURRÉARISME

Participants: Jean-Marie Normand, Guillaume Moreau.

  • Title:
    SURRÉARISME: Exploring Perceptual Realism in Mixed Reality using Novel Near-Eye Display Technologies
  • Partners:
    • École Centrale de Nantes, France
    • IMT Atlantique Brest, France
    • The University of Tokyo, Japan
  • Program:
    PHC SAKURA
  • Date/Duration:
    February 2022 - December 2022
  • Additional info/keywords:
    Mixed Reality; Perception

10.1.2 H2020 projects

H-Reality

Participants: Anatole Lécuyer [contact].

H-Reality project on cordis.europa.eu

  • Title:
    Mixed Haptic Feedback for Mid-Air Interactions in Virtual and Augmented Realities
  • Duration:
    From October 1, 2018 to March 31, 2022
  • Partners:
    • INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE (Inria), France
    • ULTRALEAP LIMITED (ULTRAHAPTICS), United Kingdom
    • ACTRONIKA (ACA), France
    • INSTITUT NATIONAL DES SCIENCES APPLIQUEES DE RENNES (INSA Rennes), France
    • CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS (CNRS), France
    • TECHNISCHE UNIVERSITEIT DELFT (TU Delft), Netherlands
  • Inria contact:
    Claudio Pacchierotti
  • Coordinator:
    THE UNIVERSITY OF BIRMINGHAM (UoB), United Kingdom
  • Summary:
    “Touch comes before sight, before speech. It is the first language and the last, and it always tells the truth” (Margaret Atwood), yet digital content today remains focused on visual and auditory stimulation. Even in the realm of VR and AR, sight and sound remain paramount. In contrast, methods for delivering haptic (sense of touch) feedback in commercial media are significantly less advanced than graphical and auditory feedback. Yet without a sense of touch, experiences ultimately feel hollow, virtual realities feel false, and Human-Computer Interfaces become unintuitive. Our vision is to be the first to imbue virtual objects with a physical presence, providing a revolutionary, untethered, virtual-haptic reality: H-Reality. The ambition of H-Reality will be achieved by integrating the commercial pioneers of ultrasonic “non-contact” haptics, state-of-the-art vibrotactile actuators, novel mathematical and tribological modelling of the skin and mechanics of touch, and experts in the psychophysical rendering of sensation. The result will be a sensory experience where digital 3D shapes and textures are made manifest in real space via modulated, focused, ultrasound, ready for the untethered hand to feel, where next-generation wearable haptic rings provide directional vibrotactile stimulation, informing users of an object's dynamics, and where computational renderings of specific materials can be distinguished via their surface properties. The implications of this technology will be far-reaching. The computer touch-screen will be brought into the third dimension so that swipe gestures will be augmented with instinctive rotational gestures, allowing intuitive manipulation of 3D data sets and strolling about the desktop as a virtual landscape of icons, apps and files. H-Reality will transform online interactions; dangerous machinery will be operated virtually from the safety of the home, and surgeons will hone their skills on thin air.
GuestXR

Participants: Anatole Lécuyer [contact], Ferran Argelaguet, Marc Macé, Justine Saint-Aubert, Jeanne Hecquard.

GuestXR project on cordis.europa.eu

  • Title:
    GuestXR: A Machine Learning Agent for Social Harmony in eXtended Reality
  • Duration:
    From January 1, 2022 to December 31, 2025
  • Partners:
    • INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE (INRIA), France
    • UNIWERSYTET WARSZAWSKI (UNIWARSAW), Poland
    • VIRTUAL BODYWORKS SL (Virtual Bodyworks S.L.), Spain
    • UNIVERSITEIT MAASTRICHT, Netherlands
    • UNIVERSITAT DE BARCELONA (UB), Spain
    • REICHMAN UNIVERSITY (REICHMAN UNIVERSITY), Israel
    • CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS (CNRS), France
    • G.TEC MEDICAL ENGINEERING GMBH (G.TEC MEDICAL ENGINEERING GMBH), Austria
  • Inria contact:
    Anatole LECUYER
  • Coordinator:
    FUNDACIO EURECAT (EURECAT), Spain
  • Summary:
    Immersive online social spaces will soon become ubiquitous. However, there is also a warning that we need to heed from social media. User content is the ‘lifeblood of social media’. However, it often stimulates antisocial interaction and abuse, ultimately posing a danger to vulnerable adults, teenagers, and children. In the VR space this is backed up by the experience of current virtual shared spaces. While they have many positive aspects, they have also become a space full of abuse. Our vision is to develop GuestXR, a socially interactive multisensory platform system that uses eXtended Reality (virtual and augmented reality) as the medium to bring people together for immersive, synchronous face-to-face interaction with positive social outcomes. The critical innovation is the intervention of artificial agents that learn over time to help the virtual social gathering realise its aims. This is an agent that we refer to as “The Guest” that exploits Machine Learning to learn how to facilitate the meeting towards specific outcomes. Underpinning this is neuroscience and social psychology research on group behaviour, which will deliver rules to Agent Based Models (ABM). The combination of AI with immersive systems (including haptics and immersive audio), virtual and augmented reality will be a hugely challenging research task, given the vagaries of social meetings and individual behaviour. Several proof of concept applications will be developed during the project, including a conflict resolution application in collaboration with the UN. A strong User Group made up of a diverse range of stakeholders from industry, academia, government and broader society will provide continuous feedback. An Open Call will be held to bring in artistic support and additional use cases from wider society. Significant work is dedicated to ethics “by design”, to identify problems and look eventually towards an appropriate regulatory framework for such socially interactive systems.
TACTILITY

Participants: Ferran Argelaguet [contact], Anatole Lécuyer, Panagiotis Kourtesis, Sebastian Vizcay.

TACTILITY project on cordis.europa.eu

  • Title:
    TACTIle feedback enriched virtual interaction through virtual realITY and beyond
  • Duration:
    From July 1, 2019 to September 30, 2022
  • Partners:
    • INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE (Inria), France
    • UNIVERSITAT JAUME I DE CASTELLON (UJI), Spain
    • IMMERSION (IMM), France
    • TECNALIA SERBIA DOO BEOGRAD (TECSR), Serbia
    • MANUS MACHINAE BV (Manus Machinae B.V.), Netherlands
    • UNIVERSITAT DE VALENCIA (UVEG), Spain
    • INSTITUT NATIONAL DES SCIENCES APPLIQUEES DE RENNES (INSA RENNES), France
    • UNIVERSITA DEGLI STUDI DI GENOVA (UNIGE), Italy
    • SMARTEX SRL (SMARTEX), Italy
    • CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS (CNRS), France
    • AALBORG UNIVERSITET (AAU), Denmark
  • Inria contact:
    Ferran Argelaguet
  • Coordinator:
    FUNDACION TECNALIA RESEARCH & INNOVATION (TECNALIA), Spain
  • Summary:
    TACTILITY is a multidisciplinary innovation and research action with the overall aim of including rich and meaningful tactile information in novel interaction systems, through technology for closed-loop tactile interaction with virtual environments. By mimicking the characteristics of natural tactile feedback, it will substantially increase the quality of the immersive VR experience, used locally or remotely (tele-manipulation). The approach is based on transcutaneous electro-tactile stimulation delivered through electrical pulses with a high-resolution spatio-temporal distribution. To achieve it, significant development of technologies for transcutaneous stimulation, textile-based multi-pad electrodes and tactile-sensing electronic skin, coupled with ground-breaking research on the perception of elicited tactile sensations in VR, is needed. The key novelty is in the combination of: 1) ground-breaking research on the perception of electrotactile stimuli, for the identification of the stimulation parameters and methods that evoke natural-like tactile sensations; 2) advanced hardware, which will integrate the novel high-resolution electrotactile stimulation system and state-of-the-art artificial electronic skin patches with smart textile technologies and VR control devices in a wearable mobile system; and 3) novel firmware, which handles real-time encoding and transmission of tactile information from virtual objects in VR, as well as from distant tactile sensors (artificial skins) placed on robotic or human hands. The proposed research and innovation action would result in a next generation of interactive systems with a higher quality of experience for both local and remote (e.g., tele-manipulation) applications. Ultimately, TACTILITY will enable a high-fidelity experience through low-cost, user-friendly, wearable and mobile technology.

10.1.3 Other european programs/initiatives

ADAPT - Interreg

Participants: Valérie Gouranton [contact], Bruno Arnaldi, Ronan Gaugne, Florian Nouviale, Alexandre Audinot.

  • Title:
    Assistive Devices for empowering disAbled People through robotic Technologies
  • Duration:
    01/2017 - 06/2022
  • Coordinator:
    ESIGELEC/IRSEEM Rouen
  • Partners:
    INSA Rennes - IRISA, LGCGM, IETR (France), Université de Picardie Jules Verne - MIS (France), Pôle Saint Hélier (France), CHU Rouen (France), Réseau Breizh PC (France), Ergovie (France), Pôle TES (France), University College of London - Aspire CREATE (UK), University of Kent (UK), East Kent Hospitals Univ NHS Found. Trust (UK), Health and Europe Centre (UK), Plymouth Hospitals NHS Trust (UK), Canterbury Christ Church University (UK), Kent Surrey Sussex Academic Health Science Network (UK), Cornwall Mobility Center (UK).
  • Inria contact:
    Valérie Gouranton
  • Summary:
    The ADAPT project aims to develop innovative assistive technologies in order to support the autonomy and to enhance the mobility of power wheelchair users with severe physical/cognitive disabilities. In particular, the objective is to design and evaluate a power wheelchair simulator as well as to design a multi-layer driving assistance system.
GENESIS - CHIST-ERA

Participants: Anatole Lécuyer [contact], Marc Macé, Ferran Argelaguet, Emile Savalle.

  • Title:
    LeveraGing nEuromarkers for Next-gEn immerSIve Systems
  • Duration:
    01/2022 - 12/2024
  • Coordinator:
    Centre de Recherche en Informatique, Signal et Automatique de Lille - France
  • Partners:
    Inria (France), ETH Zurich (Switzerland), Koç University (Turkey).
  • Inria contact:
    Anatole Lécuyer
  • Summary:
    Brain-Computer Interfaces (BCIs) enable leveraging the cerebral activity of users in order to interact with computer systems. While BCIs were originally designed to assist motor-impaired people, a new trend is emerging towards their use for a larger audience through passive BCI systems, which are able to transparently provide information regarding the users' mental states. Virtual Reality (VR) technology could largely benefit from inputs provided by passive BCIs. VR immerses users in 3D computer-generated environments in a way that makes them feel present in the virtual space and, through complete control of the environment, offers applications ranging from training and education to social networking and entertainment. Given the growing interest of society and the investments of major industrial groups, VR is considered a major revolution in Human-Computer Interaction. However, to this day, VR has not yet reached its predicted level of democratization and largely remains at the state of an entertaining experiment. This can be explained by the difficulty of characterizing users' mental states during interaction and the inherent lack of adaptation in the presentation of the virtual content. In fact, studies have shown that users experience VR in different ways. While approximately 60% of users experience “cybersickness”, the set of deleterious symptoms that may occur after a prolonged use of virtual reality systems, users can also suffer from breaks in presence and immersion due to rendering and interaction anomalies, which can lead to a poor feeling of embodiment and incarnation towards their virtual avatars. In both cases, the user's experience is severely impacted, as the VR experience strongly relies on the concepts of telepresence and immersion. The aim of this project is to pave the way for a new generation of VR systems leveraging the electrophysiological activity of the brain through a passive BCI to level up immersion in virtual environments. The objective is to provide VR systems with means to evaluate the users' mental states through the real-time classification of EEG data (a minimal sketch of such a passive classification loop is given after this summary). This will improve users' immersion in VR by reducing or preventing cybersickness, and by increasing levels of embodiment through the real-time adaptation of the virtual content to the users' mental states as provided by the BCI. In order to reach this objective, the proposed methodology is to (i) investigate neurophysiological markers associated with early signs of cybersickness, as well as neuromarkers associated with the occurrence of VR anomalies; (ii) build on existing signal processing methods for the real-time classification of these markers, associating them with corresponding mental states; and (iii) provide mechanisms for the adaptation of the virtual content to the estimated mental states.
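
As referenced in the summary above, here is a minimal sketch of the kind of passive classification loop the project targets: band-power features extracted from short EEG windows feed a linear classifier estimating the user's state. Feature bands, labels and data are illustrative assumptions, not the project's actual design.

```python
# Minimal sketch of a passive-BCI loop: extract band-power features from
# short EEG windows and classify the user's state (e.g., "comfortable"
# vs. "early cybersickness") with an LDA. Everything here is synthetic.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250  # sampling rate (Hz), assumed

def band_power(window, lo, hi, fs=FS):
    f, pxx = welch(window, fs=fs, nperseg=fs)
    return pxx[(f >= lo) & (f < hi)].mean()

def features(window):
    # theta, alpha, beta band powers
    return [band_power(window, *band) for band in [(4, 8), (8, 13), (13, 30)]]

rng = np.random.default_rng(3)
X = np.array([features(rng.normal(scale=s, size=2 * FS))
              for s in np.r_[np.ones(40), 2 * np.ones(40)]])
y = np.r_[np.zeros(40), np.ones(40)]        # 0 = comfortable, 1 = sick

clf = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", clf.score(X, y))
```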

10.2 National initiatives

10.2.1 ANR

ANR GRASP-IT

Participants: Anatole Lécuyer [contact], Mélanie Cogné, Salomé Le Franc.

  • Duration:
    2020 - 2024
  • Coordinator:
    LORIA
  • Partners:
    Inria Rennes (Hybrid), Inria Sophia, PErSEUs, CHU Rennes, CHU Toulouse, IRR UGECAM-N, and Alchimies.
  • Summary:
    The GRASP-IT project aims to help post-stroke patients recover upper limb control by improving their kinesthetic motor imagery (KMI) generation, using a tangible and haptic interface within a gamified Brain-Computer Interface (BCI) training environment. This innovative KMI-based BCI will integrate complementary interaction modalities, such as tangible and haptic interactions, in a 3D-printable flexible orthosis. We propose to design and test the usability (including efficacy towards the stimulation of the motor cortex) and acceptability of this multimodal BCI. The GRASP-IT project also proposes to design and integrate a gamified non-immersive virtual environment to interact with. This multimodal solution should provide a more meaningful, engaging and compelling stroke rehabilitation training program based on KMI production. In the end, the project will integrate and evaluate neurofeedback within the gamified multimodal BCI, in an ambitious clinical evaluation with 75 hemiplegic patients in 3 different rehabilitation centers in France.
PIA4 DemoES AIR

Participants: Valérie Gouranton [contact, resp. INSA Rennes], Bruno Arnaldi, Ronan Gaugne, Florian Nouviale, Alexandre Audinot, Adrien Reuzeau, Mathieu Risy.

  • Duration:
    12/2021 - 09/2025
  • Coordinator:
    Université Rennes 1
  • Partners:
    INSA Rennes, Université Rennes 1 et 2, Artefacto, Klaxoon
  • Description:
    The project Augmenter les Interactions à Rennes (AIR) is one of the 17 laureates chosen by the French government as part of the call for expressions of interest “Digital demonstrators in higher education” (DemoES) launched by the Ministry of Higher Education, Research and Innovation. Designed to overcome the artificial opposition between social learning and digital technology, the AIR project is structured around 3 complementary axes:
    • An augmented campus to facilitate social interactions across all activities (training, services, exchanges and social relations) and ensure their continuum as an extension of physical campuses, implemented in partnership with Orange Labs, a member of the consortium, with the support of other EdTech players such as Appscho or Jalios.
    • Interactive pedagogies to increase interactions in training and optimize learning through interactivity, ranging from the development of serious games to the use of immersive technologies (virtual reality, augmented reality, mixed reality), by developing functionalities resulting from research projects carried out within the Hybrid team at IRISA, by intensifying the partnership established since 2018 with Klaxoon, or by relying on Artefacto's immersive solutions.
    • An ecosystem of support for pedagogical and digital transformations to promote the appropriation by teachers of these new large-scale devices, in particular thanks to dedicated time allocated to these transformations, and to offer teaching teams restructured and diversified local support.
PIA4 Equipex+ Continuum

Participants: Ronan Gaugne [contact], Valérie Gouranton, Florian Nouviale.

  • Duration:
    06/2021 - 05/2028
  • Coordinator:
    CNRS
  • Description:

    CONTINUUM is an 8-year EquipEx+ project led by the CNRS as part of the 4th Future Investments Program (PIA4). Endowed with €13.6M, the project will create a collaborative research infrastructure of 30 platforms located throughout France, in order to advance interdisciplinary research between computer science and the human and social sciences. Through CONTINUUM, 37 research teams will develop cutting-edge research focused on visualization, immersion, interaction and collaboration, as well as on human perception, cognition and behavior in virtual and augmented reality. CONTINUUM is organized along two axes:

    1. Interdisciplinary research on interaction, in collaboration between computer science and the human and social sciences, in order to increase knowledge and solutions in human-centered computing;
    2. Deployment of tools and services to meet the needs of many scientific fields in terms of access to big data, simulations and virtual/augmented experiences (mathematics, physics, biology, engineering, computer science, medicine, psychology, didactics, history, archeology, sociology, etc.)

    By developing the instrument itself and using it in different fields of application, CONTINUUM will promote interdisciplinary research in order to better understand how to interact with the digital world and to enable advances in other fields of science and engineering.

10.2.2 Inria projects

Inria Challenge AVATAR

Participants: Ferran Argelaguet [contact], Anatole Lécuyer, Maé Mavromatis.

  • Duration:
    2018 - 2022
  • Coordinator:
    MimeTIC
  • Partners:
    Hybrid, Potioc, Loki, Graphdeco, Morpheo
  • External partners:
    Univ. Barcelona, Faurecia and InterDigital
  • Description:
    AVATAR aims at designing avatars (i.e., the user’s representation in virtual environments) that are better embodied, more interactive and more social, through improving all the pipeline related to avatars, from acquisition and simulation, to designing novel interaction paradigms and multi-sensory feedback.
  • URL:
Inria Challenge NAVISCOPE

Participants: Ferran Argelaguet [contact], Gwendal Fouché.

  • Duration:
    2019 - 2023
  • Coordinator:
    Serpico
  • Partners:
    Aviz, Beagle, Hybrid, Mosaic, Parietal, Morpheme
  • External partners:
    INRA and Institute Curie
  • Description:
    NAVISCOPE aims at improving visualization and machine learning methods in order to provide systems capable to assist the scientist to obtain a better understanding of massive amounts of information.
  • URL:
Inria Covid-19 Project VERARE

Participants: Mélanie Cogné [contact], Anatole Lécuyer, Justine Saint-Aubert, Valérie Gouranton, Ferran Argelaguet, Florian Nouviale, Ronan Gaugne.

VERARE (Virtual Environments for Rehabilitation After REsuscitation) is a 2-year research project funded by Inria (specific Inria Covid-19 call for projects) for assessing the efficacy of using Virtual Reality for motor rehabilitation (improving walk recovery) after resuscitation. This ambitious clinical project gathers the Hybrid team, federating 18 of its members, and the University Hospital of Rennes (Physical and Rehabilitation Medicine Unit and Intensive Care Units).

10.3 Regional initiatives

Rennes Métropole, “Créativité Croisée”, The Secret of Bastet

Participants: Valérie Gouranton [contact], Ronan Gaugne, Florian Nouviale.

  • Duration:
    2022-2023
  • Coordinator:
    INSA Rennes
  • Partners:
    Université Rennes 1, Inrap, UMR Trajectoires, Musée des Beaux-arts de Rennes, Orange Labs, Polymorph
  • Description:
    The project “The Secret of Bastet” aims to enrich the museography of the Egyptological section of the Musée des Beaux-arts de Rennes by integrating an innovative Virtual Reality museographic device allowing the general public to better understand archaeological objects. More precisely, our approach is original in that it allows visitors to access scientific content so far reserved to experts, and to experience the scientific process by discovering new digital exploration techniques, in a playful, educational, immersive and interactive way, without diminishing individual emotion. For this project, the device will focus on objects from the museum's collection of Egyptian animal mummies, specifically one of the museum's flagship works, the Cat Mummy, as well as its transparent 3D-printed replica produced in a previous project.
Rennes Métropole, “Créativité Croisée”, Event by Eleven

Participants: Ronan Gaugne [contact], Florian Nouviale.

  • Duration:
    2022-2023
  • Coordinator:
    Danse 34 Productions
  • Partners:
    Université Rennes 1, Golaem, N+N Corsino
  • Description:
    Event by Eleven is a choreographic navigation generated by an Artificial Intelligence development that proposes real-time interaction with the public. Eleven performers, real cloned dancers, wander through an abstract and changing space. Facing a large panoramic screen, each spectator is able to choose one of the interpreters among eleven possibilities, thanks to a motion-based interaction designed and developed by the Hybrid team. The choreography composed by the journeys of these eleven performers evolves over time according to the choices of the public, and this evolution is managed algorithmically. The trajectories of the dancers are organized over the course of the work and determine a random basis for the choreography. The resulting creation was presented at Belle de Mai, Marseille, in November 2022.
PULSAR – Académie des jeunes chercheurs en Pays de la Loire - PAPA

Participants: Rebecca Fribourg [contact].

  • Title:
    Prospection sur les Avatars et leur influence sur la Perception de l'Ambiance urbaine
  • Duration:
    2022- 2024
  • Coordinator:
    Région Pays de la Loire and Ecole Centrale de Nantes
  • Description:
    This project aims at using virtual reality to immerse users in virtual urban spaces and to transmit specific urban ambiences resulting from the perception of the morphology and physical characteristics of the city (lights, sounds, materials, scales, etc.). The study of the perception of urban ambiences makes it possible to anticipate reality and to interpret future environments in a subjective approach, which is a major help for urban planners when trying to understand the relationship between humans and the environment. However, current virtual urban spaces often fail to convey a realistic perception of atmosphere and space, as they are often still images or models without "real" life. The goal of this project is therefore to bring "life" into virtual urban spaces to improve the perception of urban atmosphere. We focus on the question of whether urban ambience can be influenced by using avatars to represent users in the virtual environment. Since urban spaces are populated with inhabitants, the representation of multiple users with avatars or the involvement of agents in the virtual urban space seems crucial to convey realistic atmospheres and provide convincing immersion in the urban space. We believe that this is one of the greatest strengths of Virtual Reality applied to this domain, and it has not been explored so far.

11 Dissemination

Participants: Anatole Lécuyer, Ferran Argelaguet, Bruno Arnaldi, Mélanie Cogné, Rebecca Fribourg, Ronan Gaugne, Valérie Gouranton, Marc Macé, Jean-Marie Normand, Guillaume Moreau, Florian Nouviale, Léa Pillette.

11.1 Promoting scientific activities

11.1.1 Conferences

General chair, scientific chair
  • Guillaume Moreau was Program Chair for the journal track of IEEE ISMAR.
Chair of conference program committees
  • Ferran Argelaguet was Program chair for IEEE VR (Journal track) 2022.
  • Jean-Marie Normand was Program Chair of ICAT-EGVE 2022, the joint international conference of the 32nd International Conference on Artificial Reality and Telexistence & the 27th Eurographics Symposium on Virtual Environments.
Member of the conference program committees
  • Anatole Lécuyer was member of the IEEE VR 2022 IPC.
  • Ferran Argelaguet was member of the ISMAR 2022 IPC.
  • Rebecca Fribourg was member of the ICAT-EGVE 2022 IPC.
  • Valérie Gouranton was member of the IEEE VR 2022 and HUCAPP 2022 IPCs.
  • Jean-Marie Normand was member of the IEEE ISMAR 2022 and the IEEE VR 2022 IPCs.
Reviewer
  • Ferran Argelaguet was reviewer for ACM CHI 2022, EuroXR 2022 and IEEE VR 2022.
  • Rebecca Fribourg was a reviewer for IEEE VR 2022, IEEE ISMAR 2022, ICAT-EGVE 2022.
  • Marc Macé was reviewer for IEEE VR 2022.
  • Guillaume Moreau was reviewer for IEEE VR 2022.
  • Jean-Marie Normand was reviewer for IEEE VR 2022, IEEE ISMAR 2022, ICAT-EGVE 2022.

11.1.2 Journals

Member of the editorial boards
  • Anatole Lécuyer served as associate editor for the journals Presence and Frontiers in Virtual Reality.
  • Ferran Argelaguet is Review Editor of Frontiers in Virtual Reality.
  • Valérie Gouranton is Review Editor of Frontiers in Virtual Reality.
  • Jean-Marie Normand is Review Editor of Frontiers in Virtual Reality and Review Editor of Frontiers in Neuroergonomics.
Reviewer - reviewing activities
  • Ferran Argelaguet was reviewer for IEEE TVCG, Frontiers in Virtual Reality.
  • Mélanie Cogné was reviewer for the Annals of Physical and Rehabilitation Medicine journal.
  • Rebecca Fribourg was reviewer for IEEE TVCG.
  • Ronan Gaugne was reviewer for “Journal of Archaeological Science: Reports” (Elsevier) and “Sensors” (MDPI).
  • Valérie Gouranton was reviewer for Frontiers in Virtual Reality.
  • Jean-Marie Normand was reviewer for IEEE TVCG.

11.1.3 Invited talks

  • Anatole Lécuyer was keynote speaker at ICAT-EGVE 2022 and the 3rd ERCIM-JST workshop.
  • Ferran Argelaguet was invited speaker at the 3rd ERCIM-JST workshop. Title: “Creating Rich Tactile Virtual Reality Experiences using Electrotactile Feedback”
  • Rebecca Fribourg was invited speaker at Rencontres Mécatroniques 2022.
  • Rebecca Fribourg was invited speaker at Les Journées RV/RA et Sport du CREPS. Title: “Avoir une bonne expérience avec un avatar en réalité virtuelle afin de pouvoir l'utiliser comme outil de formation”.
  • Mélanie Cogné was an invited speaker at the APPROCHE 2022 congress (V4R track). Title: “VR for cognitive rehabilitation: state of the art”.
  • Marc Macé was invited speaker at the “journée GT RV/IA”. Title: “Environnements virtuels et modèles de décision pour l’interaction”.
  • Guillaume Moreau gave invited talks at CEA Marcoule and at Numica (Réseau du numérique et de l'IT en Champagne).

11.1.4 Leadership within the scientific community

  • Anatole Lécuyer is a member of the IEEE VR Steering committee.
  • Valérie Gouranton is a Member of the Consortium 3D of TGIR HumaNum.
  • Guillaume Moreau is a member of the IEEE ISMAR Steering committee.

11.1.5 Scientific expertise

  • Anatole Lécuyer was member of selection committee of INCR (Rennes Clinical Neuroscience Institute).
  • Ferran Argelaguet was member of the Inria Bordeaux CRCN selection jury.
  • Mélanie Cogné was a reviewer for a GIRCI-Grand Ouest project call.
  • Rebecca Fribourg was a jury member for Prix de thèse Gilles Kahn.
  • Valérie Gouranton was a member of the Conseil National des Universités, 27th section (computer science), of the ANR committee “Factory of the Future”, and of the selection committee for an associate professor position (CNU 18) at Université Rennes 2.
  • Guillaume Moreau is member of the scientific council of ENSIEE and of Ibisc Laboratory. He is also member of the Scientific Advisory Board of CLARTE. He serves as an administrator of Cominlabs on behalf of IMT Atlantique.
  • Jean-Marie Normand was member of the selection committee for a MCF position CNU 27 at the ENISE school.

11.1.6 Research administration

  • Ferran Argelaguet is member of the scientific committee of the EUR Digisport and the EquipEx+ Continuum.
  • Rebecca Fribourg is member of the scientific committee of Ecole Centrale de Nantes.
  • Valérie Gouranton is Head of cross-cutting Axis “Art, Heritage & Culture” at IRISA UMR 6074 and she was an elected member of the IRISA laboratory council.
  • Guillaume Moreau is Deputy Dean of Research and Innovation at IMT Atlantique.

11.2 Teaching - Supervision - Juries

11.2.1 Teaching

In this section, only courses related to the main research field of Hybrid are listed.

Anatole Lécuyer:

  • Master AI-ViC: “Haptic Interaction and Brain-Computer Interfaces”, 6h, M2, Ecole Polytechnique, FR
  • Master MNRV: “Haptic Interaction”, 9h, M2, ENSAM, Laval, FR
  • Master SIBM: “Haptic and Brain-Computer Interfaces”, 4.5h, M2, University of Rennes 1, FR
  • Master SIF: “Pseudo-Haptics and Brain-Computer Interfaces”, 6h, M2, INSA Rennes, FR

Bruno Arnaldi:

  • Master SIF: “VRI: Virtual Reality and Multi-Sensory Interaction Course”, 4h, M2, INSA Rennes, FR
  • Master INSA Rennes: “CG: Computer Graphics”, 12h, M2, INSA Rennes, FR
  • Master INSA Rennes: “Virtual Reality”, courses 6h, projects 16h, M1 and M2, INSA Rennes, FR
  • Master INSA Rennes: Projects on “Virtual Reality”, 50h, M1, INSA Rennes, FR

Ferran Argelaguet:

  • Master STS Informatique: “Techniques d'Interaction Avancées”, 26h, M2, ISTIC, University of Rennes 1, FR
  • Master SIF: “Virtual Reality and Multi-Sensory Interaction”, 8h, M2, INSA Rennes, FR
  • Master SIF: “Data Mining and Visualization”, 2h, M2, University of Rennes 1, FR
  • Master AI-ViC: “Virtual Reality and 3D Interaction”, 3h, M2, École Polytechnique, FR

Valérie Gouranton:

  • Licence: Project on “Virtual Reality”, 28h, L3, responsible for this lecture, INSA Rennes, FR
  • Master INSA Rennes: “Virtual Reality”, 22h, M2, INSA Rennes, FR
  • Master INSA Rennes: Projects on “Virtual Reality”, 60h, M1, INSA Rennes, FR
  • Master CN: “Virtual Reality”, 3h, M1, University of Rennes 2, FR
  • Responsible for international relations, INSA Rennes, Computer science department

Ronan Gaugne:

  • INSA Rennes: Projects on “Virtual Reality”, 24h, L3, INSA Rennes, FR
  • Master Digital Creation: “Virtual Reality”, 6h, M1, University of Rennes 2, FR

Jean-Marie Normand:

  • Virtual Reality Major, “Computer Graphics”, 24h, M1/M2, École Centrale de Nantes, FR
  • Virtual Reality Major, “Fundamentals of Virtual Reality”, 20h, M1/M2, École Centrale de Nantes, FR
  • Virtual Reality Major, “Computer Vision and Augmented Reality”, 25h, M1/M2, École Centrale de Nantes, FR
  • Virtual Reality Major, “Advanced Concepts in VR/AR”, 24h, M1/M2, École Centrale de Nantes, FR
  • Virtual Reality Major, “Projects on Virtual Reality”, 20h, M1/M2, École Centrale de Nantes, FR
  • Master MTI3D: “Virtual Embodiment”, 3h, M2, ENSAM Laval, FR

Rebecca Fribourg:

  • Virtual Reality Major, “C++ Programming for VR”, 20h, M1/M2, École Centrale de Nantes, FR
  • Virtual Reality Major, “Fundamentals of Virtual Reality”, 7h, M1/M2, École Centrale de Nantes, FR
  • Virtual Reality Major, “Advanced Concepts in VR/AR”, 16h, M1/M2, École Centrale de Nantes, FR
  • Virtual Reality Major, “Projects on Virtual Reality”, 15h, M1/M2, École Centrale de Nantes, FR
  • Virtual Reality Major, “Projects in OpenGL C++”, 10h, M1/M2, École Centrale de Nantes, FR
  • Virtual Reality Major, “Immersive Data Visualization”, 12h, M1/M2, École Centrale de Nantes, FR

11.2.2 Supervision

  • PhD: Nicolas Olivier, “Avatar stylization”. Defended in March 2022. Supervised by Franck Multon (MimeTIC, Inria) and Ferran Argelaguet.
  • PhD: Gwendal Fouché, “Immersive Interaction and Visualization of Temporal 3D Data”. Defended in December 2022. Supervised by Ferran Argelaguet, Charles Kervrann (Serpico, Inria) and Emmanuelle Faure (Mosaic, Inria).
  • PhD: Adélaïde Genay, “Embodiment in Augmented Reality”. Defended in December 2022. Supervised by Anatole Lécuyer, Martin Hachet (Potioc, Inria).
  • PhD: Sebastian Vizcay, “Dexterous Interaction in Virtual Reality using High-Density Electrotactile Feedback”. Defended in December 2022. Supervised by Ferran Argelaguet, Maud Marchal and Claudio Pacchierotti (Rainbow, Inria).
  • PhD: Salomé Le Franc, “Haptic Neurofeedback Design for Stroke”. Defended in September 2022. Supervised by Anatole Lécuyer, Isabelle Bonan (CHU Rennes) and Mélanie Cogné.
  • PhD in progress: Martin Guy, “Physiological markers for characterizing virtual embodiment”, Started in October 2019, Supervised by Guillaume Moreau (IMT Atlantique), Jean-Marie Normand (ECN) and Camille Jeunet (CNRS, INCIA).
  • PhD in progress: Grégoire Richard, “Touching Avatars: The role of haptic feedback in virtual embodiment”, Started in October 2019, Supervised by Géry Casiez (Loki, Inria), Thomas Pietrzak (Loki, Inria), Anatole Lécuyer and Ferran Argelaguet.
  • PhD in progress: Lysa Gramoli, “Simulation of autonomous agents in connected virtual environments”, Started in October 2020, Supervised by Valérie Gouranton, Bruno Arnaldi, Jérémy Lacoche (Orange), Anthony Foulonneau (Orange).
  • PhD in progress: Vincent Goupil, “Hospital 2.0: Generation of Virtual Reality Applications by BIM Extraction”, Started in October 2020, Supervised by Valérie Gouranton, Bruno Arnaldi, Anne-Solène Michaud (Vinci Construction).
  • PhD in progress: Gabriela Herrera, “Neurofeedback based on VR and haptics”, Started in January 2021. Supervised by Laurent Bougrain (LORIA), Stéphanie Fleck (Univ. Lorraine), and Anatole Lécuyer.
  • PhD in progress: Maé Mavromatis, “Towards Avatar-Friendly Characterization of VR Interaction Methods”, Started in October 2021, Supervised by Anatole Lécuyer, Ferran Argelaguet and Ludovic Hoyet (MimeTIC, Inria).
  • PhD in progress: Antonin Cheymol, “Body-based Interfaces in Mixed Reality for Urban Applications”, Started in November 2021, Supervised by Anatole Lécuyer, Ferran Argelaguet, Jean-Marie Normand (ECN) and Rebecca Fribourg (ECN).
  • PhD in progress: Yann Moullec, “Walking Sensations in VR”, Started in October 2021, Supervised by Anatole Lécuyer and Mélanie Cogné.
  • PhD in progress: Emilie Hummel, “Rehabilitation post-Cancer based on VR”, Started in October 2021, Supervised by Anatole Lécuyer, Valérie Gouranton and Mélanie Cogné.
  • PhD in progress: Jeanne Hecquard, “Affective Haptics in Virtual Reality”, Started in October 2022, Supervised by Marc Macé, Anatole Lécuyer, Ferran Argelaguet and Claudio Pacchierotti (Rainbow, Inria).
  • PhD in progress: Emile Savalle, “Cybersickness assessment in Virtual Reality using Neurophysiology”, Started in October 2022, Supervised by Marc Macé, Anatole Lécuyer, Ferran Argelaguet and Léa Pillette.
  • PhD in progress: Mathieu Risy, “Pedagogical models in Virtual Reality training environments”, Started in October 2022, Supervised by Valérie Gouranton.
  • PhD in progress: Sony Saint-Auret, “Collaborative real tennis in virtual reality”, Started in November 2022, Supervised by Valérie Gouranton, Ronan Gaugne, Franck Multon and Richard Kulpa (MimeTIC, Inria).
  • PhD in progress: Julien Lomet, “Co-creation of a virtual reality artwork, from the artist to the viewer”, Started in October 2022, Supervised by Valérie Gouranton, Ronan Gaugne, Joël Laurent (UR2), Cédric Plessiet (Université Paris 8).

11.2.3 Juries

  • Anatole Lécuyer was a reviewer for the PhD thesis of André Zenner. He was president of the jury for the PhD theses of Adèle Colas, Vamsi Guda and Anne-Laure Guinet.
  • Ferran Argelaguet was a reviewer for the PhD thesis of Flavien Lebrun.
  • Valérie Gouranton was a reviewer for the PhD thesis of Pierre Mahieux (ENIB).
  • Guillaume Moreau was president of the jury for the PhD thesis of Pierre Bégout.
  • Marc Macé was a reviewer for the HDR thesis of Damien Gabriel (INSERM, CHRU Besançon).

11.3 Popularization

  • Conference at Digital Tech Conference “Le Metavers, innovation (ir)responsable” (Ferran Argelaguet) - December 2022
  • Article in “Le Monde” newspaper (Anatole Lécuyer and Guillaume Moreau) - October 2022
  • Article in “The Conversation” journal (Bruno Arnaldi) - October 2022
  • Article in “La Recherche” journal on “Haptics” (Anatole Lécuyer, Justine Saint-Aubert) - September 2022
  • Article in “La Recherche” journal on “Avatars” (Rebecca Fribourg, Jean-Marie Normand) - September 2022
  • Article in “La Recherche” journal on “Archeology and VR” (Ronan Gaugne, Valérie Gouranton) - September 2022
  • TEDx talk by Rebecca Fribourg at TEDx Rennes - 2022
  • Conference on “Métavers et RV” (Anatole Lécuyer) at Cité des Sciences - September 2022
  • Podcast for Alliancy Mag on “Virtual Reality” (Anatole Lécuyer) - September 2022
  • Webinar for BPI France on “Metaverse et technologies RV” (Anatole Lécuyer) - September 2022
  • Conference at Digital Week “Débat public sur Métavers, l’attraction du virtuel” (Jean-Marie Normand) - September 2022
  • Article in “Etapes” journal (Anatole Lécuyer) - August 2022
  • Conference at TimeWorld 2022 (Anatole Lécuyer) - May 2022
  • TV appearance on the France 3 TV channel on “VR and archeology” (Valérie Gouranton) - March 2022
  • Article in “Sciences et Vie” on “VR” (Anatole Lécuyer) - January 2022
  • Article in I'MTech “Quésaco le Metavers ?” (Guillaume Moreau) - April 2022
  • Article in “Ca m'intéresse” “C'est quoi le Métavers ?” (Guillaume Moreau) - May 2022
  • Article in “Monaco Hebdo” about the metaverse (Guillaume Moreau) - February 2022

12 Scientific production

12.1 Major publications

  • 1 F. Argelaguet Sanz, L. Hoyet, M. Trico and A. Lécuyer. “The role of interaction in virtual embodiment: Effects of the virtual hand representation”. IEEE Virtual Reality, Greenville, United States, March 2016, 3-10.
  • 2 M.-S. Bracq, E. Michinov, B. Arnaldi, B. Caillaud, B. Gibaud, V. Gouranton and P. Jannin. “Learning procedural skills with a virtual reality simulator: An acceptability study”. Nurse Education Today, 79, August 2019, 153-160.
  • 3 X. De Tinguy, C. Pacchierotti, A. Lécuyer and M. Marchal. “Capacitive Sensing for Improving Contact Rendering with Tangible Objects in VR”. IEEE Transactions on Visualization and Computer Graphics, January 2021.
  • 4 M. Fleury, G. Lioi, C. Barillot and A. Lécuyer. “A Survey on the Use of Haptic Feedback for Brain-Computer Interfaces and Neurofeedback”. Frontiers in Neuroscience, 14, June 2020.
  • 5 R. Fribourg, F. Argelaguet Sanz, A. Lécuyer and L. Hoyet. “Avatar and Sense of Embodiment: Studying the Relative Preference Between Appearance, Control and Point of View”. IEEE Transactions on Visualization and Computer Graphics, 26(5), May 2020, 2062-2072.
  • 6 R. Gaugne, F. Labaune-Jean, D. Fontaine, G. Le Cloirec and V. Gouranton. “From the engraved tablet to the digital tablet, history of a fifteenth century music score”. Journal on Computing and Cultural Heritage, 13(3), 2020, 1-18.
  • 7 F. Lécuyer, V. Gouranton, A. Lamercerie, A. Reuzeau, B. Arnaldi and B. Caillaud. “Unveiling the implicit knowledge, one scenario at a time”. The Visual Computer, 2020, 1-12.
  • 8 F. Lécuyer, V. Gouranton, A. Reuzeau, R. Gaugne and B. Arnaldi. “Create by doing - Action sequencing in VR”. CGI 2019 - Computer Graphics International, Advances in Computer Graphics, Calgary, Canada, Springer International Publishing, June 2019, 329-335.
  • 9 V. Mercado, M. Marchal and A. Lécuyer. “ENTROPiA: Towards Infinite Surface Haptic Displays in Virtual Reality Using Encountered-Type Rotating Props”. IEEE Transactions on Visualization and Computer Graphics, 27(3), March 2021, 2237-2243.
  • 10 G. Moreau, B. Arnaldi and P. Guitton. Virtual Reality, Augmented Reality: Myths and Realities. Computer Engineering Series, ISTE, March 2018, 322 p.
  • 11 T. Nicolas, R. Gaugne, C. Tavernier, Q. Petit, V. Gouranton and B. Arnaldi. “Touching and interacting with inaccessible cultural heritage”. Presence: Teleoperators and Virtual Environments, 24(3), 2015, 265-277.
  • 12 E. Peillard, Y. Itoh, J.-M. Normand, F. Argelaguet Sanz, G. Moreau and A. Lécuyer. “Can Retinal Projection Displays Improve Spatial Perception in Augmented Reality?” ISMAR 2020 - 19th IEEE International Symposium on Mixed and Augmented Reality, Recife, Brazil, IEEE, November 2020, 124-133.
  • 13 E. Peillard, T. Thebaud, J.-M. Normand, F. Argelaguet Sanz, G. Moreau and A. Lécuyer. “Virtual Objects Look Farther on the Sides: The Anisotropy of Distance Perception in Virtual Reality”. VR 2019 - 26th IEEE Conference on Virtual Reality and 3D User Interfaces, Osaka, Japan, IEEE, March 2019, 227-236.
  • 14 H. Si-Mohammed, J. Petit, C. Jeunet, F. Argelaguet Sanz, F. Spindler, A. Evain, N. Roussel, G. Casiez and A. Lécuyer. “Towards BCI-based Interfaces for Augmented Reality: Feasibility, Design and Evaluation”. IEEE Transactions on Visualization and Computer Graphics, 26(3), March 2020, 1608-1621.

12.2 Publications of the year

International journals

International peer-reviewed conferences

Conferences without proceedings

  • 39 A. Genay, A. Lécuyer and M. Hachet. “What Can I Do There? Controlling AR Self-Avatars to Better Perceive Affordances of the Real World”. ISMAR 2022 - 21st IEEE International Symposium on Mixed and Augmented Reality, Singapore, IEEE, October 2022, 1-10.
  • 40 V. Goupil, V. Gouranton, A.-S. Michaud, J.-Y. Gauvrit and B. Arnaldi. “A BIM-based model to study wayfinding signage using virtual reality”. WBC 2022 - CIB World Building Congress, Melbourne, Australia, June 2022, 1-10.
  • 41 L. Gramoli, J. Lacoche, A. Foulonneau, V. Gouranton and B. Arnaldi. “Control Your Virtual Agent in its Daily-activities for Long Periods”. PAAMS 2022 - 20th International Conference on Practical Applications of Agents and Multi-Agent Systems, Lecture Notes in Computer Science 13616, L'Aquila, Italy, Springer International Publishing, October 2022, 203-216.
  • 42 J. Lomet, R. Gaugne and V. Gouranton. “Could you relax in an artistic co-creative virtual reality experience?” ICAT-EGVE 2022 - joint international conference of the 32nd International Conference on Artificial Reality and Telexistence and the 27th Eurographics Symposium on Virtual Environments, Yokohama, Japan, November 2022, 1-9.
  • 43 H. Si-Mohammed, C. Haumont, A. Sanchez, C. Plapous, F. Bouchnak, J.-P. Javaudin and A. Lécuyer. “Designing Functional Prototypes Combining BCI and AR for Home Automation”. EuroXR 2022 - International Conference on Virtual Reality and Mixed Reality, Lecture Notes in Computer Science 13484, Stuttgart, Germany, Springer International Publishing, September 2022, 3-21.

Scientific book chapters

Doctoral dissertations and habilitation theses

  • 46 V. Gouranton. Modèles et Outils pour la Production d'Applications de RX, Focus sur le Patrimoine Culturel. Habilitation thesis (HdR), Université Rennes 1, March 2022.

12.3 Cited publications

  • 47 D. A. Bowman, E. Kruijff, J. J. LaViola and I. Poupyrev. 3D User Interfaces: Theory and Practice. Addison-Wesley, 2004.
  • 48 A. Lécuyer. “Simulating Haptic Feedback Using Vision: A Survey of Research and Applications of Pseudo-Haptic Feedback”. Presence: Teleoperators and Virtual Environments, 18(1), 2009, 39-53.