2020
Activity report
Project-Team
HYBRID
RNSR: 201322122U
In partnership with:
Institut national des sciences appliquées de Rennes, Université Rennes 1
Team name:
3D interaction with virtual environments using body and mind
In collaboration with:
Institut de recherche en informatique et systèmes aléatoires (IRISA)
Domain
Perception, Cognition and Interaction
Theme
Interaction and visualization
Creation of the Team: January 01, 2013; updated into Project-Team: July 01, 2013

Keywords

Computer Science and Digital Science

  • A2.5. Software engineering
  • A5.1. Human-Computer Interaction
  • A5.1.1. Engineering of interactive systems
  • A5.1.2. Evaluation of interactive systems
  • A5.1.3. Haptic interfaces
  • A5.1.4. Brain-computer interfaces, physiological computing
  • A5.1.5. Body-based interfaces
  • A5.1.6. Tangible interfaces
  • A5.1.7. Multimodal interfaces
  • A5.1.8. 3D User Interfaces
  • A5.1.9. User and perceptual studies
  • A5.2. Data visualization
  • A5.6. Virtual reality, augmented reality
  • A5.6.1. Virtual reality
  • A5.6.2. Augmented reality
  • A5.6.3. Avatar simulation and embodiment
  • A5.6.4. Multisensory feedback and interfaces
  • A5.10.5. Robot interaction (with the environment, humans, other robots)
  • A6. Modeling, simulation and control
  • A6.2. Scientific computing, Numerical Analysis & Optimization
  • A6.3. Computation-data interaction

Other Research Topics and Application Domains

  • B1.2. Neuroscience and cognitive science
  • B2.4. Therapies
  • B2.5. Handicap and personal assistances
  • B2.6. Biological and medical imaging
  • B2.8. Sports, performance, motor skills
  • B5.1. Factory of the future
  • B5.2. Design and manufacturing
  • B5.8. Learning and training
  • B5.9. Industrial maintenance
  • B6.4. Internet of things
  • B8.1. Smart building/home
  • B8.3. Urbanism and urban planning
  • B9.1. Education
  • B9.2. Art
  • B9.2.1. Music, sound
  • B9.2.2. Cinema, Television
  • B9.2.3. Video games
  • B9.4. Sports
  • B9.6.6. Archeology, History

1 Team members, visitors, external collaborators

Research Scientists

  • Anatole Lécuyer [Team leader, Inria, Senior Researcher, HDR]
  • Fernando Argelaguet Sanz [Inria, Researcher]

Faculty Members

  • Bruno Arnaldi [INSA Rennes, Professor, HDR]
  • Mélanie Cogné [Univ de Rennes I, Hospital Staff]
  • Valérie Gouranton [INSA Rennes, Associate Professor]
  • Maud Marchal [INSA Rennes, Associate Professor, until Mar 2020, HDR]

Post-Doctoral Fellows

  • Thomas Howard [CNRS]
  • Panagiotis Kourtesis [Inria, from Oct 2020]
  • Justine Saint-Aubert [Inria, from Apr 2020]

PhD Students

  • Guillaume Bataille [Orange Labs, until Sep 2020]
  • Antonin Bernardin [INSA Rennes, until Jun 2020]
  • Hugo Brument [Univ de Rennes I]
  • Xavier De Tinguy De La Girouliere [INSA Rennes, until Aug 2020]
  • Diane Dewez [Inria]
  • Mathis Fleury [Inria]
  • Gwendal Fouché [Inria]
  • Rebecca Fribourg [Inria]
  • Adelaïde Genay [Inria]
  • Vincent Goupil [Vinci Construction, CIFRE, from Nov 2020]
  • Lysa Gramoli [Orange Labs, CIFRE, from Oct 2020]
  • Martin Guy [ECN]
  • Romain Lagneau [INSA Rennes, until Jun 2020]
  • Salome Le Franc [Centre hospitalier régional et universitaire de Rennes]
  • Flavien Lecuyer [INSA Rennes, until Aug 2020]
  • Tiffany Luong [Institut de recherche technologique B-com, CIFRE]
  • Victor Rodrigo Mercado Garcia [Inria]
  • Nicolas Olivier [InterDigital, CIFRE]
  • Etienne Peillard [Inria, until Nov 2020]
  • Grégoire Richard [Inria]
  • Romain Terrier [Institut de recherche technologique B-com, until Sep 2020]
  • Guillaume Vailland [INSA Rennes]
  • Sebastian Vizcay [Inria]

Technical Staff

  • Alexandre Audinot [INSA Rennes, Engineer]
  • Ronan Gaugne [Univ de Rennes I, Engineer]
  • Thierry Gaugry [INSA Rennes, Engineer, until Jun 2020]
  • Florian Nouviale [INSA Rennes, Engineer]
  • Thomas Prampart [Inria, Engineer]
  • Adrien Reuzeau [INSA Rennes, Engineer, until Sep 2020]
  • Hakim Si Mohammed [Inria, Engineer, until Jun 2020]

Interns and Apprentices

  • Bastien Daniel [INSA Rennes, Intern, from Jun 2020 until Sep 2020]
  • Loic Delabrouille [École normale supérieure de Rennes, Intern, from Jun 2020 until Aug 2020]
  • Quentin Denis-Lutard [Inria, Intern, from May 2020 until Aug 2020]
  • Rafik Drissi [Univ de Rennes I, Intern, until Sep 2020]
  • Timothee Durgeaud [INSA Rennes, Intern, from Jun 2020 until Aug 2020]
  • Johan Julien [INSA Rennes, Intern, from May 2020 until Aug 2020]
  • Meven Leblanc [INSA Rennes, Intern, from Jun 2020 until Sep 2020]
  • Thomas Rinnert [Inria, Intern, until Mar 2020]
  • Killian Tiroir [INSA Rennes, Intern, from Apr 2020 until Aug 2020]

Administrative Assistant

  • Nathalie Denis [Inria]

External Collaborators

  • Guillaume Moreau [École centrale de Nantes, HDR]
  • Jean Marie Normand [École centrale de Nantes]

2 Overall objectives

Our research project belongs to the scientific field of Virtual Reality (VR) and 3D interaction with virtual environments. VR systems can be used in numerous applications, such as industry (virtual prototyping, assembly or maintenance operations, data visualization), entertainment (video games, theme parks), arts and design (interactive sketching or sculpture, CAD, architectural mock-ups), education and science (physical simulations, virtual classrooms), or medicine (surgical training, rehabilitation systems). A major change that we foresee in the next decade in the field of Virtual Reality relates to the emergence of new paradigms of interaction (input/output) with Virtual Environments (VE).

As of today, the most common way to interact with 3D content remains measuring the user's motor activity, i.e., gestures and physical motions performed when manipulating different kinds of input devices. However, a recent trend consists in soliciting more movement and greater physical engagement of the user's body. We can notably stress the emergence of bimanual interaction, natural walking interfaces, and whole-body involvement. These new interaction schemes bring a new level of complexity in terms of generic physical simulation of potential interactions between the virtual body and the virtual surroundings, and a challenging trade-off between performance and realism. Moreover, research is also needed to characterize the influence of these new sensory cues on the user's resulting feelings of "presence" and immersion.

Besides, a novel kind of user input has recently appeared in the field of virtual reality: the user's mental activity, which can be measured by means of a "Brain-Computer Interface" (BCI). Brain-Computer Interfaces are communication systems which measure a user's electrical cerebral activity and translate it, in real time, into an exploitable command. BCIs introduce a new way of interacting "by thought" with virtual environments. However, current BCIs can only extract a small number of mental states, and hence a small number of mental commands. Thus, research is still needed to extend the capacities of BCIs, and to better exploit the few available mental states in virtual environments.

Our first motivation thus consists in designing novel "body-based" and "mind-based" controls of virtual environments and reaching, in both cases, more immersive and more efficient 3D interaction.

Furthermore, in current VR systems, motor activities and mental activities are always considered separately and exclusively. This is reminiscent of the well-known "body-mind dualism" at the heart of historical philosophical debates. In this context, our objective is to introduce novel "hybrid" interaction schemes in virtual reality, by considering motor and mental activities jointly, i.e., in a harmonious, complementary, and optimized way. Thus, we intend to explore novel paradigms of 3D interaction mixing body and mind inputs. Our approach becomes even more challenging when considering and connecting multiple users, which implies multiple bodies and multiple brains collaborating and interacting in virtual reality.

Our second motivation thus consists in introducing a "hybrid approach" which mixes the mental and motor activities of one or multiple users in virtual reality.

3 Research program

The scientific objective of the Hybrid team is to improve the 3D interaction of one or multiple users with virtual environments, by making full use of the physical engagement of the body, and by incorporating mental states by means of brain-computer interfaces. We intend to improve each component of this framework individually, but we also want to improve the subsequent combinations of these components.

The "hybrid" 3D interaction loop between one or multiple users and a virtual environment is depicted in Figure 1. Different kinds of 3D interaction situations are distinguished (red arrows, bottom): 1) body-based interaction, 2) mind-based interaction, 3) hybrid and/or 4) collaborative interaction (with at least two users). In each case, three scientific challenges arise which correspond to the three successive steps of the 3D interaction loop (blue squares, top): 1) the 3D interaction technique, 2) the modeling and simulation of the 3D scenario, and 3) the design of appropriate sensory feedback.

Figure 1: 3D hybrid interaction loop between one or multiple users and a virtual reality system. Top (in blue) three steps of 3D interaction with a virtual environment: (1-blue) interaction technique, (2-blue) simulation of the virtual environment, (3-blue) sensory feedbacks. Bottom (in red) different cases of interaction: (1-red) body-based, (2-red) mind-based, (3-red) hybrid, and (4-red) collaborative 3D interaction.

The 3D interaction loop involves various possible inputs from the user(s) and different kinds of output (or sensory feedback) from the simulated environment. Each user can involve his/her body and mind by means of corporal and/or brain-computer interfaces. A hybrid 3D interaction technique (1) mixes mental and motor inputs and translates them into a command for the virtual environment. The real-time simulation (2) of the virtual environment takes these commands into account to change and update the state of the virtual world and virtual objects. The state changes are sent back to the user and perceived by means of different sensory feedbacks (e.g., visual, haptic and/or auditory) (3). The sensory feedbacks close the 3D interaction loop. Other users can interact with the virtual environment using the same procedure, and can possibly "collaborate" by means of "collaborative interaction techniques" (4).
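
To make these three steps concrete, here is a minimal C# sketch of one iteration of such a loop. All type and method names are illustrative placeholders, not an actual Hybrid codebase or API; the mapping from mental state to command is deliberately trivial.

// Hypothetical sketch of one iteration of the hybrid 3D interaction loop.
public interface IMotorInput { float[] Pose(); }          // body tracking
public interface IBrainInput { float[] MentalState(); }   // BCI output
public interface IWorld      { void Apply(string cmd); void Simulate(float dt); }
public interface IDisplay    { void Render(IWorld w); }

public static class HybridLoop
{
    // (1) Interaction technique: translate motor and mental inputs
    // into an explicit command for the virtual environment.
    static string Translate(float[] pose, float[] mental) =>
        mental[0] > 0.5f ? "select" : "move";

    public static void Step(IMotorInput body, IBrainInput mind,
                            IWorld world, IDisplay display, float dt)
    {
        string command = Translate(body.Pose(), mind.MentalState()); // (1)
        world.Apply(command);                                        // (2) real-time simulation
        world.Simulate(dt);
        display.Render(world);                                       // (3) feedback closes the loop
    }
}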

This description stresses three major challenges, which correspond to three mandatory steps when designing 3D interaction with virtual environments:

  • 3D interaction techniques: This first step consists in translating the actions or intentions of the user (inputs) into an explicit command for the virtual environment. In virtual reality, the classical tasks that require such user commands were categorized early on into four kinds [69]: navigating the virtual world, selecting a virtual object, manipulating it, or controlling the application (entering text, activating options, etc.). However, the addition of a third dimension, the use of stereoscopic rendering and the use of advanced VR interfaces make many techniques that proved efficient in 2D inappropriate, and make it necessary to design specific interaction techniques and adapted tools. This challenge is renewed here by the various kinds of 3D interaction which are targeted. In our case, we consider various situations, with motor and/or cerebral inputs, and potentially multiple users.
  • Modeling and simulation of complex 3D scenarios: This second step corresponds to the update of the state of the virtual environment, in real time, in response to all the potential commands or actions sent by the user. The complexity of the data and phenomena involved in 3D scenarios is constantly increasing. It corresponds for instance to the multiple states of the entities present in the simulation (rigid, articulated, deformable, fluid, which can constitute both the user's virtual body and the different manipulated objects), and the multiple physical phenomena implied by natural human interactions (squeezing, breaking, melting, etc.). The challenge consists here in modeling and simulating these complex 3D scenarios while meeting, at the same time, two strong constraints of virtual reality systems: performance (real time and interactivity) and genericity (e.g., multi-resolution, multi-modal, multi-platform, etc.).
  • Immersive sensory feedbacks: This third step corresponds to the display of the multiple sensory feedbacks (output) coming from the various VR interfaces. These feedbacks enable the user to perceive the changes occurring in the virtual environment. They close the 3D interaction loop, making the user immersed, and potentially generating a subsequent feeling of presence. Among the various VR interfaces developed so far, we can stress two kinds of sensory feedback: visual feedback (3D stereoscopic images using projection-based systems such as CAVE systems or Head-Mounted Displays); and haptic feedback (related to the sense of touch and to tactile or force-feedback devices). The Hybrid team has a strong expertise in haptic feedback, and in the design of haptic and "pseudo-haptic" rendering [70]. Note that a major trend in the community, strongly supported by the Hybrid team, relates to a "perception-based" approach, which aims at designing sensory feedbacks well in line with human perceptual capacities.

These three scientific challenges are addressed differently according to the context and the user inputs involved. We propose to consider three different contexts, which correspond to the three different research axes of the Hybrid research team, namely: 1) body-based interaction (motor input only), 2) mind-based interaction (cerebral input only), and then 3) hybrid and collaborative interaction (i.e., the mixing of body and brain inputs from one or multiple users).

3.1 Research Axes

The scientific activity of the Hybrid team follows three main research axes:

  • Body-based interaction in virtual reality. Our first research axis concerns the design of immersive and effective "body-based" 3D interactions, i.e., interactions relying on a physical engagement of the user's body. This trend is probably the most popular one in VR research at the moment. Most VR setups make use of tracking systems which measure specific positions or actions of the user in order to interact with a virtual environment. However, in recent years, novel options have emerged for measuring "full-body" movements or other, even less conventional, inputs (e.g., body equilibrium). In this first research axis we are thus concerned with the emergence of new kinds of "body-based interaction" with virtual environments. This implies the design of novel 3D user interfaces and 3D interaction techniques, novel simulation models and techniques, and novel sensory feedbacks for body-based interaction with virtual worlds. It involves real-time physical simulation of complex interactive phenomena, and the design of corresponding haptic and pseudo-haptic feedback.
  • Mind-based interaction in virtual reality. Our second research axis concerns the design of immersive and effective "mind-based" 3D interactions in virtual reality. Mind-based interaction with virtual environments makes use of Brain-Computer Interface technology, which corresponds to the direct use of brain signals to send "mental commands" to an automated system such as a robot, a prosthesis, or a virtual environment. BCI is a rapidly growing area of research, and several impressive prototypes are already available. However, the emergence of such a novel user input also calls for novel and dedicated 3D user interfaces. This implies studying the extension of the mental vocabulary available for 3D interaction with VE, then designing specific 3D interaction techniques "driven by the mind" and, last, designing immersive sensory feedbacks that could help improve the learning of brain control in VR.
  • Hybrid and collaborative 3D interaction. Our third research axis intends to study the combination of motor and mental inputs in VR, for one or multiple users. This concerns the design of mixed systems, with potentially collaborative scenarios involving multiple users, and thus multiple bodies and multiple brains sharing the same VE. This research axis therefore involves two interdependent topics: 1) collaborative virtual environments, and 2) hybrid interaction. It should result in collaborative virtual environments with multiple users, and shared systems with body and mind inputs.

4 Application domains

4.1 Overview

The research program of the Hybrid team targets the next generations of virtual reality systems and 3D user interfaces, which could address both the "body" and "mind" of the user. Novel interaction schemes are designed, for one or multiple users. We target better integrated systems and more compelling user experiences.

The applications of our research program correspond to the applications of virtual reality technologies which could benefit from the addition of novel body-based or mind-based interaction capabilities:

  • Industry: with training systems, virtual prototyping, or scientific visualization;
  • Medicine: with rehabilitation and reeducation systems, or surgical training simulators;
  • Entertainment: with the movie industry, content customization, video games, or attractions in theme parks;
  • Construction: with virtual mock-up design and review, or historical/architectural visits;
  • Cultural Heritage: with acquisition, virtual excavation, virtual reconstruction and visualization.

5 Social and environmental responsibility

5.1 Impact of research results

A salient initiative launched in 2020 and carried out by Hybrid is the Inria Covid-19 project "VERARE". VERARE is a unique and innovative concept, implemented in record time thanks to a close collaboration between the Hybrid research team and the teams from the intensive care and physical and rehabilitation medicine departments of Rennes University Hospital. VERARE consists in using virtual environments and VR technologies for the rehabilitation of Covid-19 patients coming out of coma, weakened, and with strong difficulties in recovering walking. With VERARE, the patient is immersed in different virtual environments using a VR headset and is represented by an "avatar" carrying out different motor tasks involving the lower limbs, for example walking, jogging, or avoiding obstacles. Our main hypothesis is that the observation of such virtual actions, and the progressive resumption of motor activity in VR, will allow a quicker start to rehabilitation, as soon as the patient leaves the ICU. The patient can then carry out sessions in his/her room, or even from the hospital bed, in simple and secure conditions, hoping to obtain a final clinical benefit, either in terms of motor and walking recovery or in terms of hospital length of stay. The project started at the end of April 2020, and we were able to deploy a first version of our application at the Rennes hospital in mid-June 2020, only two months after the project started. Several patients have already started using our virtual reality application at the Rennes University Hospital, and the clinical evaluation of VERARE is expected to be completed in 2021.

6 Highlights of the year

  • Mélanie Cogné (Medical Doctor, PhD, CHU Rennes) has joined the Hybrid team as a permanent member.

6.1 Awards

  • IEEE VR Best Journal Paper Award 2020 [20]: obtained by Rebecca Fribourg, Ferran Argelaguet, Anatole Lécuyer, Ludovic Hoyet, for the paper entitled: "Avatar and Sense of Embodiment: Studying the Relative Preference Between Appearance, Control and Point of View".
  • IEEE VR Best Conference Paper Award 2020 [53]: obtained by Hakim Si-Mohammed, Catarina Lopes-Dias, María Duarte, Ferran Argelaguet Sanz, Camille Jeunet, Géry Casiez, Gernot Müller-Putz, Anatole Lécuyer, Reinhold Scherer, for the paper entitled: "Detecting System Errors in Virtual Reality Using EEG Through Error-Related Potentials".
  • EuroVR Conference Best Paper Award 2020 [38]: obtained by Hugo Brument, Maud Marchal, Anne-Hélène Olivier, Ferran Argelaguet, for the paper entitled: "Influence of Dynamic Field of View Restrictions on Rotation Gain Perception in Virtual Environments".
  • ICAT-EGVE Conference Best Paper Award 2020 [41]: obtained by Rebecca Fribourg, Evan Blanpied, Ludovic Hoyet, Anatole Lécuyer, Ferran Argelaguet, for the paper entitled: "Influence of Threat Occurrence and Repeatability on the Sense of Embodiment and Threat Response in VR".
  • IEEE VR 3DUI Contest Best Demo Award - Honorable Mention 2020 [36]: obtained by Alexandre Audinot, Diane Dewez, Gwendal Fouché, Rebecca Fribourg, Thomas Howard, Flavien Lécuyer, Tiffany Luong, Victor Mercado, Adrien Reuzeau, Thomas Rinnert, Guillaume Vailland, and Ferran Argelaguet, for the paper entitled: "3Dexterity: Finding your place in a 3-armed world".

7 New software and platforms

7.1 New software

7.1.1 #FIVE

  • Name: Framework for Interactive Virtual Environments
  • Keywords: Virtual reality, 3D, 3D interaction, Behavior modeling
  • Scientific Description: #FIVE (Framework for Interactive Virtual Environments) is a framework for the development of interactive and collaborative virtual environments. #FIVE was developed to answer the need for an easier and faster design and development of virtual reality applications. #FIVE provides a toolkit that simplifies the declaration of possible actions and behaviours of objects in a VE. It also provides a toolkit that facilitates the setting and management of collaborative interactions in a VE. It supports the distribution of the VE across different setups. It also proposes guidelines to efficiently create a collaborative and interactive VE. The current implementation is in C# and comes with a Unity3D engine integration, compatible with the MiddleVR framework.
  • Functional Description: #FIVE contains software modules that can be interconnected and that help build interactive and collaborative virtual environments. Thanks to #FIVE's modules, the user can focus on the domain-specific aspects of his/her application (industrial training, medical training, etc.). These modules can be used in a vast range of domains requiring interactive environments and collaboration, such as training (a hypothetical usage sketch is given after this list).
  • URL: https://bil.inria.fr/fr/software/view/2527/tab
  • Publication: hal-01147734
  • Contacts: Florian Nouviale, Bruno Arnaldi, Valérie Gouranton
  • Participants: Florian Nouviale, Valérie Gouranton, Bruno Arnaldi, Vincent Goupil, Carl-Johan Jorgensen, Emeric Goga, Adrien Reuzeau, Alexandre Audinot
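
By way of illustration only, the following C# sketch shows what a #FIVE-like declaration of object actions and behaviours could look like. The API below (InteractiveObject, Declare, Trigger) is entirely hypothetical and does not reproduce the actual #FIVE toolkit.

// Hypothetical, #FIVE-like declaration of object capabilities (invented API).
using System;
using System.Collections.Generic;

public class InteractiveObject
{
    readonly Dictionary<string, Action> actions = new Dictionary<string, Action>();

    // Declare a possible action on this object and the behaviour it triggers.
    public void Declare(string action, Action behaviour) => actions[action] = behaviour;

    // Run the declared behaviour, if any, when the action is performed in the VE.
    public void Trigger(string action)
    {
        if (actions.TryGetValue(action, out var behaviour)) behaviour();
    }
}

public static class Demo
{
    public static void Main()
    {
        var valve = new InteractiveObject();
        valve.Declare("open",  () => Console.WriteLine("valve opened"));
        valve.Declare("close", () => Console.WriteLine("valve closed"));
        valve.Trigger("open");   // prints "valve opened"
    }
}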

7.1.2 #SEVEN

  • Name: Sensor Effector Based Scenarios Model for Driving Collaborative Virtual Environments
  • Keywords: Virtual reality, Interactive Scenarios, 3D interaction
  • Scientific Description: #SEVEN (Sensor Effector Based Scenarios Model for Driving Collaborative Virtual Environments) is a model and an engine based on Petri nets extended with sensors and effectors, enabling the description and execution of complex and interactive scenarios.
  • Functional Description: #SEVEN enables the execution of complex scenarios for driving Virtual Reality applications. #SEVEN's scenarios are based on enhanced Petri net and state machine models, which are able to describe and solve intricate event sequences (a schematic sketch of the sensor/effector idea is given after this list). #SEVEN comes with an editor for creating, editing, remotely controlling and running scenarios. #SEVEN is implemented in C# and can be used as a stand-alone application or as a library. An integration with the Unity3D engine, compatible with MiddleVR, also exists.
  • Release Contributions: Added state machine handling for scenario description, in addition to the already existing Petri net format. Improved scenario editor.
  • URL: https://bil.inria.fr/fr/software/view/2528/tab
  • Publications: hal-01147733, hal-01199738, tel-01419698, hal-01086237
  • Contacts: Valérie Gouranton, Bruno Arnaldi, Florian Nouviale
  • Participants: Florian Nouviale, Valérie Gouranton, Bruno Arnaldi, Vincent Goupil, Emeric Goga, Carl-Johan Jorgensen, Adrien Reuzeau, Alexandre Audinot
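
The following minimal C# sketch illustrates the core idea of a Petri net transition extended with a sensor (a condition on the world that must hold for the transition to fire) and an effector (an effect applied to the world when it fires). All names are invented; this is a conceptual sketch, not the actual #SEVEN code.

// Minimal sketch of a Petri net transition with a sensor and an effector,
// in the spirit of #SEVEN (invented names, not the real API).
using System;
using System.Collections.Generic;

public class Transition
{
    public List<string> Inputs = new List<string>();   // places consumed
    public List<string> Outputs = new List<string>();  // places produced
    public Func<bool> Sensor = () => true;             // world condition required to fire
    public Action Effector = () => { };                // effect applied to the world

    // A transition fires when all input places are marked AND its sensor holds.
    public bool TryFire(Dictionary<string, int> marking)
    {
        foreach (var p in Inputs)
            if (!marking.TryGetValue(p, out int n) || n == 0) return false;
        if (!Sensor()) return false;

        foreach (var p in Inputs)  marking[p]--;   // consume tokens
        foreach (var p in Outputs) marking[p] = marking.TryGetValue(p, out int m) ? m + 1 : 1;
        Effector();                                // act on the virtual environment
        return true;
    }
}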

7.1.3 OpenVIBE

  • Keywords: Neurosciences, Interaction, Virtual reality, Health, Real time, Neurofeedback, Brain-Computer Interface, EEG, 3D interaction
  • Functional Description: OpenViBE is a free and open-source software platform devoted to the design, test and use of Brain-Computer Interfaces (BCI). The platform consists of a set of software modules that can be integrated easily and efficiently to design BCI applications. The key features of the OpenViBE software are its modularity, its high performance, its portability, its multi-user facilities and its connection with high-end/VR displays. The platform's designer tool enables users to build complete scenarios based on existing software modules, using a dedicated graphical language and a simple Graphical User Interface (GUI); a schematic sketch of the kind of processing chain such a scenario assembles is given after this list. This software is available on the Inria Forge under the terms of the AGPL licence, and it was officially released in June 2009. Since then, the OpenViBE software has been downloaded more than 60,000 times, and it is used by numerous laboratories, projects, and individuals worldwide. More information, downloads, tutorials, videos and documentation are available on the OpenViBE website.
  • URL: http://openvibe.inria.fr
  • Authors: Charles Garraud, Jérôme Chabrol, Thierry Gaugry, Cedric Riou, Yann Renard, Anatole Lécuyer, Jozef Legény, Laurent Bonnet, Jussi Tapio Lindgren, Fabien Lotte, Thomas Prampart, Thibaut Monseigne
  • Contacts: Anatole Lécuyer, Ana Bela Leconte
  • Participants: Cedric Riou, Thierry Gaugry, Anatole Lécuyer, Fabien Lotte, Jussi Tapio Lindgren, Laurent Bougrain, Maureen Clerc, Théodore Papadopoulo
  • Partners: INSERM, GIPSA-Lab
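
OpenViBE scenarios are assembled graphically in the designer, so the C# fragment below is only a schematic illustration of the kind of module chain such a scenario typically encodes (signal filtering, feature extraction, classification); none of it is OpenViBE code, and every name and constant is invented.

// Illustrative BCI module chain (signal -> filter -> feature -> classifier),
// mimicking the box pipeline an OpenViBE scenario assembles graphically.
using System;
using System.Linq;

public static class MiniBciPipeline
{
    // Crude moving-average smoothing, standing in for a real band-pass filter.
    static double[] Smooth(double[] eeg, int window) =>
        Enumerable.Range(0, eeg.Length - window + 1)
                  .Select(i => eeg.Skip(i).Take(window).Average())
                  .ToArray();

    // Feature: mean power of the filtered window.
    static double Power(double[] x) => x.Select(v => v * v).Average();

    // Classifier: a fixed threshold, standing in for a trained model.
    public static string Classify(double[] eegWindow, double threshold = 1.0)
        => Power(Smooth(eegWindow, 4)) > threshold ? "command A" : "command B";
}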

7.2 New platforms

7.2.1 Immerstar

With the two virtual reality technological platforms Immersia and Immermove, grouped under the name Immerstar, the team has access to high-level scientific facilities. This equipment benefits the research teams of the center and has allowed them to extend their local, national and international collaborations. The Immerstar platform received Inria funding for the 2015-2019 period, which enabled several important evolutions. In particular, in 2018, a haptic system covering the entire volume of the Immersia platform was installed, allowing various configurations, from single haptic device usage to dual haptic device usage with either one or two users. In addition, a motion platform designed to introduce motion feedback for powered wheelchair simulations has also been incorporated (see Figure 2).

We celebrated the twentieth anniversary of the Immersia platform in November 2019 by inaugurating the new haptic equipment. We organized scientific presentations, which welcomed 150 participants, as well as visits for the support services, which welcomed 50 people.

Building on this support, in 2020 we participated in a PIA3 Equipex+ proposal. This proposal, CONTINUUM, involves 22 partners; it has been successfully evaluated and will be granted. The CONTINUUM project will create a collaborative research infrastructure of 30 platforms located throughout France, to advance interdisciplinary research based on interaction between computer science and the human and social sciences. Thanks to CONTINUUM, 37 research teams will develop cutting-edge research programs focusing on visualization, immersion, interaction and collaboration, as well as on human perception, cognition and behaviour in virtual/augmented reality, with potential impact on societal issues. CONTINUUM enables a paradigm shift in the way we perceive, interact, and collaborate with complex digital data and digital worlds, by putting humans at the center of the data processing workflows. The project will empower scientists, engineers and industry users with a highly interconnected network of high-performance visualization and immersive platforms to observe, manipulate, understand and share digital data, real-time multi-scale simulations, and virtual or augmented experiences. All platforms will feature facilities for remote collaboration with other platforms, as well as mobile equipment that can be lent to users to facilitate onboarding.

Figure 2: Immersia platform: (Left) “Scale-One” Haptic system for one or two users. (Right) Motion platform for a powered wheelchair simulation.

8 New results

8.1 Virtual Reality Tools and Usages

8.1.1 Do Distant or Colocated Audiences Affect User Activity in VR?

Participants: Romain Terrier, Valérie Gouranton, Bruno Arnaldi.

In this work [61] we explored the impact of distant or colocated real audiences on social inhibition through a user study in virtual reality (VR) (see Figure 3). The study investigated the differences between two multi-user configurations (i.e., the local and distant conditions) and one control condition where the user is alone (i.e., the alone condition). In the local condition, a single user and a real audience shared the same real room. Conversely, in the distant condition, the user and the audience were separated into two different real rooms. The user performed a number categorization task in VR, for which the users' performance (i.e., answer type and answering time) was recorded, along with subjective feelings and perceptions (i.e., perception of others, stress, cognitive workload, presence). The differences between the local and distant configurations were explored. Furthermore, we investigated any gender biases in the objective and subjective results. In both the local and distant conditions, the presence of a real audience affected the user's performance due to social inhibition. The users were even more influenced when the audience did not share the same room, despite the audience being less directly perceived in this condition.

Figure 3: Depiction of the different experimental conditions. From left to right, colocated, virtual environment and non-colocated.

This work was done in collaboration with Orange Labs Rennes and the Institute of research and technology b<>com, Cesson-Sevigne, France.

8.1.2 Scenario-based VR Framework for Product Design

Participants: Romain Terrier, Valérie Gouranton, Bruno Arnaldi.

Virtual Reality (VR) applications are promising solutions for supporting design processes across multiple domains. For complex systems (e.g., machines, cities, interior layouts), VR applications are used alongside Computer Assisted Design (CAD) systems which are (1) rigid (i.e., they lack customization), and (2) limit the design iterations. VR systems need to address these shortcomings so that they can become widespread and adaptable across design domains. We thus propose a new scenario-based VR framework [54] and a new generic theoretical design model to assist developers in creating versatile and personalized applications for designers. The generic theoretical model describes the common design activities shared by many design domains, and the scenario depicts the design model to allow design iterations in VR. Through scenarios, the VR framework enables creating customized copies of the generic design process to fulfill the needs of each design domain. The customization capability of our solution is illustrated on a use case.

This work was done in collaboration with the Institute of research and technology b<>com, Cesson-Sevigne, France and the Human Design Group (HDG), Toulouse, France.

8.1.3 A Machine Learning Tool to Match 2D Drawings and 3D Objects’ Category for Populating Mockups in VR

Participants: Romain Terrier.

Virtual Environments (VE) relying on Virtual Reality (VR) can facilitate co-design by enabling users to create 3D mockups directly in the VE. Databases of 3D objects are helpful to populate the mockup, but they require retrieval methods for the users. In the early stages of the design process, mockups are made up of common objects rather than variations of objects. Retrieving a 3D object in a large database can be tedious, even more so in VR. Taking into account the need for natural user interaction and the need to populate the mockup with common 3D objects, we propose, in this work, a retrieval method based on 2D sketching in VR and machine learning. Our system is able to recognize 90 categories of objects related to VR interior design with an accuracy of up to 86%.

This work was done in collaboration with the Institute of research and technology b<>com, Cesson-Sevigne, France.

8.1.4 Action sequencing in VR, a no-code Approach

Participants: Flavien Lécuyer, Adrien Reuzeau, Ronan Gaugne, Valérie Gouranton, Bruno Arnaldi.

In many domains, it is common to have procedures with a given sequence of actions to follow. To learn and practice such procedures, virtual reality is a helpful tool, as it allows placing a user in a given situation as many times as needed, without risk. Indeed, learning in a real situation implies risks for both the studied object – or the patient – (e.g., a badly treated injury) and the trainee (e.g., lack of danger awareness). To do this, it is necessary to integrate the procedure in the virtual environment, in the form of a scenario. Creating such a scenario is a difficult task for a domain expert, as the coding skill level needed is too high. Often, a developer is needed to manage the creation of the virtual content, with the drawbacks this implies (e.g., time loss and misunderstandings).

We propose [29] a complete workflow to let domain experts create their own scenarized content for virtual reality, without any need for coding. This workflow is divided into two steps: first, a new approach is provided to generate a scenario without any code, through the principle of creating by doing. Then, efficient methods are provided to reuse the scenario in an application in different ways, for either a human user guided by the scenario, or a virtual actor controlled by it (see Figure 4), as sketched below.
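
The following C# sketch, with entirely invented names, illustrates the "creating by doing" principle: actions demonstrated in VR are recorded as a straight scenario, which can then be replayed either as step-by-step guidance for a human user or as commands driving a virtual actor.

// Sketch of "create by doing": record demonstrated actions, then reuse them.
using System;
using System.Collections.Generic;

public class RecordedScenario
{
    readonly List<string> steps = new List<string>();

    // Called while the expert demonstrates the procedure in VR.
    public void Record(string action) => steps.Add(action);

    // Replay mode 1: present the steps as guidance for a human user.
    public IEnumerable<string> NextInstructions() => steps;

    // Replay mode 2: drive a virtual actor with the same steps.
    public void Drive(Action<string> virtualActor)
    {
        foreach (var step in steps) virtualActor(step);
    }
}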

Figure 4: The proposed workflow starts from the action recording to generate a straight scenario, which can then be edited to obtain a more complex sequence.

8.1.5 Unveiling the implicit knowledge, one scenario at a time

Participants: Flavien Lécuyer, Adrien Reuzeau, Valérie Gouranton, Bruno Arnaldi.

When defining virtual reality applications with complex procedures, such as medical operations or mechanical assembly and maintenance procedures, the complexity and variability of the procedures make the definition of the scenario difficult and time-consuming. Indeed, the variability complicates the definition of the scenario by the experts, and its combinatorics demand a comprehension effort from the developer which is often out of reach. Additionally, the experts have a hard time explaining the procedures with a sufficient level of detail, as they usually forget to mention some actions that are, in fact, important for the application.

To ease the creation of scenarios, we propose [28] a complete methodology based on (1) an iterative process composed of: (2) the recording of actions in virtual reality to create sequences of actions, and (3) the use of mathematical tools that can generate a complete scenario from a few of those sequences, with (4) graphical visualization of the scenarios and complexity indicators. This process helps the expert determine the sequences that must be recorded to obtain a scenario with the required variability (see Figure 5).

Figure 5: Our method lets the expert iterate to create the scenario through several recordings.

This work was done in collaboration with the Hycomes team.

8.1.6 Vestibular Feedback on a Virtual Reality Wheelchair Driving Simulator: A Pilot Study

Participants: Guillaume Vailland, Valérie Gouranton, Bruno Arnaldi.

Autonomy and the ability to maintain social activities can be challenging for people with disabilities experiencing reduced mobility. In the case of disabilities that impact mobility, power wheelchairs can help such people retain or regain autonomy. Nonetheless, driving a power wheelchair is a complex task that requires a combination of cognitive, visual and visuo-spatial abilities. In this context, driving simulators might be efficient and promising tools to provide alternative, adaptive, flexible, and safe training. In previous work, we proposed a Virtual Reality (VR) driving simulator [56, 57] integrating vestibular feedback to simulate wheelchair motion sensations. The performance and acceptability of a VR simulator rely on satisfying user Quality of Experience (QoE). This work presents a pilot study assessing the impact of the vestibular feedback provided on user QoE (see Figure 6). The results show that activating vestibular feedback increases the Sense of Presence (SoP) and decreases cybersickness.

Figure 6: VR simulator for wheelchair driving with vestibular feedback.

This work was done in collaboration with the Rainbow team.

8.1.7 Introducing Mental Workload Assessment for the Design of VR Training Scenarios

Participants: Tiffany Luong, Ferran Argelaguet, Anatole Lécuyer.

Training is one of the major use cases of Virtual Reality (VR), due to the flexibility and reproducibility of VR simulations. However, the use of the user's cognitive state, and in particular mental workload (MWL), remains largely unexplored in the design of training scenarios. In this work [47], we propose to consider MWL in the design of complex training scenarios involving multiple parallel tasks in VR. The proposed approach is based on the assessment of the MWL elicited by each potential task configuration in the training application. Following the assessment, the resulting model is then used to create training scenarios able to modulate the user's MWL over time, as sketched below. This approach is illustrated by a VR flight training simulator, based on the Multi-Attribute Task Battery II, able to generate 12 different task configurations. A first user study (N=38) was conducted to assess the MWL for each task configuration using self-reports and performance measurements. This assessment was then used to generate three training scenarios inducing different levels of MWL over time. A second user study (N=14) confirmed that the proposed approach was able to induce the expected MWL over time for each training scenario. These results pave the way for further studies exploring how MWL modulation can be used to improve VR training applications.
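
As a minimal illustration of the scenario-generation step, the hypothetical C# sketch below picks, for each time slot, the task configuration whose assessed MWL score is closest to the desired workload at that time. The data structures and the greedy selection are assumptions for illustration, not the paper's exact procedure.

// Hypothetical sketch: from assessed MWL scores per task configuration,
// build a scenario matching a target workload curve over time.
using System;
using System.Collections.Generic;
using System.Linq;

public static class MwlScenarioGenerator
{
    public static List<string> Generate(
        Dictionary<string, double> assessedMwl,  // configuration -> assessed MWL score
        double[] targetMwlOverTime)              // desired MWL per time slot
    {
        return targetMwlOverTime
            .Select(target => assessedMwl
                .OrderBy(kv => Math.Abs(kv.Value - target))  // closest score wins
                .First().Key)
            .ToList();
    }
}

For instance, a target curve rising from low to high workload would yield a scenario of increasingly demanding task configurations.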

This work was done in collaboration with the Institute of research and technology b<>com, Cesson-Sevigne, France.

8.1.8 Towards Real-Time Recognition of Users' Mental Workload Using Integrated Physiological Sensors into a VR HMD

Participants: Tiffany Luong, Ferran Argelaguet, Anatole Lécuyer.

In this work [48] we proposed an "all-in-one" solution for the real-time recognition of users' mental workload in Virtual Reality (VR), through the customization of a commercial HMD with physiological sensors. First, we describe the hardware and software solution employed to build the system (see Figure 7). Second, we detail the machine learning methods used for the automatic recognition of the users' mental workload, which are based on the well-known Random Forest algorithm. In order to gather data to train the system, we conducted an extensive user study with 75 participants, using a VR flight simulator to induce different levels of mental workload. With the data collected, we were able to train the system to classify four different levels of mental workload with an accuracy of up to 65%. In addition, we discuss the role of the signal normalization procedures (illustrated below), analyse the contribution of the different physiological signals to the recognition accuracy, and compare the results obtained with the sensors embedded in the HMD with commercial-grade systems. Taken together, our results suggest that such an all-in-one approach, with physiological sensors directly embedded in the HMD, is a promising path for VR applications in which the real-time or offline assessment of mental workload is beneficial.
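
As an illustration of the kind of normalization procedure at stake, the C# sketch below expresses a physiological feature stream (e.g., PPG- or EDA-derived features) relative to a per-user resting baseline via z-scoring. This is an assumed, simplified instance of signal normalization, and the Random Forest classifier itself is abstracted away.

// Sketch of per-user baseline normalization of a physiological feature
// stream before classification (illustrative; not the paper's exact code).
using System;
using System.Linq;

public static class PhysioNormalization
{
    // z-score a feature stream against a resting-phase baseline recording.
    public static double[] Normalize(double[] feature, double[] baseline)
    {
        double mean = baseline.Average();
        double std = Math.Sqrt(baseline.Select(v => (v - mean) * (v - mean)).Average());
        return feature.Select(v => (v - mean) / (std + 1e-9)).ToArray();
    }
}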

Figure 7: Hardware solution for the "all-in-one" solution for the real-time recognition of users' mental workload in VR. The sensors are placed on a Vive Pro Eye HMD, which has eye-tracking. (1) PPG sensor, (2) electrodes to assess the EDA, (3) electronic card, (4) 3D-printed case.

This work was done in collaboration with the Institute of research and technology b<>com, Cesson-Sevigne, France.

8.1.9 Influence of Dynamic Field of View Restrictions on Rotation Gain Perception in Virtual Environments

Participants: Hugo Brument, Maud Marchal, Ferran Argelaguet.

The perception of rotation gains, defined as a modification of the virtual rotation with respect to the real rotation, has been widely studied to determine detection thresholds, and widely applied in redirected navigation techniques. In contrast, Field of View (FoV) restrictions have been explored in virtual reality as a mitigation strategy for motion sickness, although they can alter users' perception and navigation performance in virtual environments. This work [38] explored whether dynamic FoV manipulations, also referred to as vignetting, can alter the perception of rotation gains during virtual rotations in virtual environments (see Figure 8). We conducted a study to estimate and compare perceptual thresholds of rotation gains while varying the vignetting type (no vignetting, horizontal and global vignetting) and the vignetting effect (luminance or blur). Twenty-four participants performed 60- or 90-degree virtual rotations in a virtual forest, with different rotation gains applied, and had to choose whether or not the virtual rotation was greater than the physical one. Results showed that the point of subjective equality differed across vignetting types, but not across vignetting effects or turns. Subjective questionnaires indicated that vignetting seems less comfortable than the baseline condition for performing the task. We discuss the applications of such results to improve the design of vignetting for redirection techniques, as well as the understanding of the perception of rotation gains.
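
As a minimal illustration of the definition above, a rotation gain g simply scales the physical head rotation before it is applied to the virtual camera (variable names are illustrative):

// Rotation gain as defined above: virtual rotation = gain * real rotation
// (g > 1 amplifies the physical turn, g < 1 attenuates it).
public static class RotationGain
{
    // realDeltaYaw: physical head rotation this frame, in degrees.
    public static float VirtualDeltaYaw(float realDeltaYaw, float gain)
        => gain * realDeltaYaw;   // applied to the camera each frame
}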

Figure 8: Illustration of the 4 different Field of View restrictions (vignetting) during the same rightwards rotation: (a) Horizontal Luminance; (b) Global Luminance; (c) Horizontal Blur; (d) Global Blur.

This work was done in collaboration with the MimeTIC and Rainbow teams.

8.1.10 Does the Control Law Matter? Characterization and Evaluation of Control Laws for Virtual Steering Navigation

Participants: Hugo Brument, Maud Marchal, Ferran Argelaguet.

This work [39] aimed to investigate the influence of the control law used in virtual steering techniques, and in particular of the speed update, on users' behaviour while navigating in virtual environments. To this end, we first proposed a characterization of existing control laws. Then, we designed a user study to evaluate the impact of the control law on users' behaviour and performance in a navigation task. Participants had to perform a virtual slalom while wearing a head-mounted display. They followed three different sinusoidal-like trajectories (with low, medium and high curvature) using a torso-steering navigation technique with three different control laws (constant, linear and adaptive; see the sketch below). The adaptive control law, based on the biomechanics of human walking, takes into account the relation between speed and curvature. We performed a spatial and temporal analysis of the trajectories in both the virtual and the real environment. The results show that users' trajectories and behaviours were significantly affected by the shape of the trajectory, but also by the control law. In particular, users' angular velocity was higher with the constant and linear laws than with the adaptive law. In addition, the constant and linear laws generated a higher variability in linear speed, angular velocity and acceleration profiles compared to the adaptive law. The analysis of subjective feedback suggests that these differences might result in a lower perceived physical demand and effort for the adaptive control law. This work concludes by discussing the potential applications of such results to improve the design and evaluation of navigation control laws.
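
The sketch below illustrates, in C#, the three families of speed-update laws compared in the study. The adaptive law is shown here as a one-third power law linking speed and curvature, which is one plausible instantiation of the biomechanics-inspired relation, not necessarily the exact law used in the paper; all constants are illustrative.

// Illustrative speed-update control laws for virtual steering navigation.
using System;

public static class SteeringControlLaws
{
    // Constant law: a fixed advance speed regardless of input.
    public static float Constant(float vMax) => vMax;

    // Linear law: speed grows linearly with the input magnitude
    // (e.g., how far the torso is leaning), input01 in [0, 1].
    public static float Linear(float input01, float vMax) => input01 * vMax;

    // Adaptive law: slow down in curves, as humans do when walking
    // (one plausible form: v = k * curvature^(-1/3), clamped to vMax).
    public static float Adaptive(float curvature, float k, float vMax)
        => Math.Min(vMax, k * (float)Math.Pow(Math.Max(curvature, 1e-4f), -1.0 / 3.0));
}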

This work was done in collaboration with the MimeTIC and Rainbow teams.

8.2 Virtual Avatars

8.2.1 Avatar and Sense of Embodiment: Studying the Relative Preference Between Appearance, Control and Point of View

Participants: Rebecca Fribourg, Ferran Argelaguet, Anatole Lécuyer.

In Virtual Reality, a number of studies have been conducted to assess the influence of avatar appearance, avatar control and user point of view on the Sense of Embodiment (SoE) towards a virtual avatar. However, such studies tend to explore each factor in isolation. This work aims to better understand the inter-relations among these three factors by means of a subjective matching experiment. We conducted an experiment [20] in which participants had to match a given "optimal" SoE avatar configuration (realistic avatar, full-body motion capture, first-person point of view), starting from a "minimal" SoE configuration (minimal avatar, no control, third-person point of view), by iteratively increasing the level of each factor. The choices of the participants provide insights into their preferences and their perception of the three factors considered. Moreover, the subjective matching procedure was conducted in the context of four different interaction tasks, with the goal of covering a wide range of actions an avatar can perform in a VE (see Figure 9). The work also included a baseline experiment (n=20), used to define the number and order of the different levels of each factor prior to the subjective matching experiment (e.g., different degrees of realism ranging from abstract to personalised avatars for the visual appearance). The results of the subjective matching experiment show, first, that point of view and control levels were consistently increased by users before appearance levels when it came to enhancing the SoE. Second, several configurations were identified as providing a SoE equivalent to that of the optimal configuration, but these varied between tasks. Taken together, our results provide valuable insights about which factors to prioritize in order to enhance the SoE towards an avatar in different tasks, and about configurations which lead to a fulfilling SoE in VE.

Figure 9: The four tasks implemented in the subjective matching experiment with avatars at the maximum level of appearance. From left to right: Punching, Soccer, Fitness and Walking.

This work was done in collaboration with the MimeTIC team.

8.2.2 Virtual Co-Embodiment: Evaluation of the Sense of Agency while Sharing the Control of a Virtual Body among Two Individuals

Participants: Rebecca Fribourg, Ferran Argelaguet, Anatole Lécuyer.

In this work, we introduce a concept called "virtual co-embodiment" [21], which enables a user to share their virtual avatar with another entity (e.g., another user, robot, or autonomous agent), as depicted in Figure 10. We describe a proof of concept in which two users can be immersed from a first-person perspective in a virtual environment and can have complementary levels of control (total, partial, or none) over a shared avatar. In addition, we conducted an experiment to investigate the influence of users' level of control over the shared avatar, and of prior knowledge of their actions, on the users' sense of agency and motor actions. The results showed that participants are good at estimating their real level of control, but significantly overestimate their sense of agency when they can anticipate the motion of the avatar. Moreover, participants performed similar body motions regardless of their real control over the avatar. The results also revealed that the internal dimension of the locus of control, which is a personality trait, is negatively correlated with the user's perceived level of control. The combined results unfold a new range of applications in the fields of virtual-reality-based training and collaborative teleoperation, where users would be able to share their virtual body.
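
The weighted-average control described above can be sketched in a few lines of Unity C#. For brevity the blend is shown on a single pose rather than on every tracked joint, and the names are illustrative:

// Sketch of shared-avatar control: the avatar pose is a weighted average of
// the two users' tracked poses (e.g., weightB = 0.75 gives "User B" 75% control).
using UnityEngine;

public static class CoEmbodiment
{
    public static void Blend(Transform userA, Transform userB,
                             Transform sharedAvatar, float weightB)
    {
        // Positions blend linearly; rotations blend via spherical interpolation.
        sharedAvatar.position = Vector3.Lerp(userA.position, userB.position, weightB);
        sharedAvatar.rotation = Quaternion.Slerp(userA.rotation, userB.rotation, weightB);
    }
}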

Figure 10: Our "Virtual Co-Embodiment" experience enables a pair of users to be embodied simultaneously in the same virtual avatar (Left). The positions and orientations of the two users are applied to the virtual body of the avatar based on a weighted average, e.g., "User A" with 25% control and "User B" with 75% control over the virtual body (Right).

This work was done in collaboration with the MimeTIC team and the University of Tokyo.

8.2.3 Virtual Avatars as Children Companions For a VR-based Educational Platform: How Should They Look Like?

Participants: Jean-Marie Normand.

Virtual Reality (VR) has the potential to become a game changer in education, with studies showing that VR can lead to better quality of, and access to, education. One promising area, especially for young children, is the use of Virtual Companions that act as teaching assistants and support the learners' educational journey in the virtual environment. However, as is the case in real life, the appearance of the virtual companions can be critical for the learning experience. This work studies the impact of the age, gender and general appearance (human- or robot-like) of virtual companions on 9-12 year old children [55]. Our results over two experiments (n=24 and n=13) tend to show that children have a greater sense of Spatial Presence, Engagement and Ecological Validity when interacting with a human-like virtual companion of the same age and of a different gender.

This work was done in collaboration with Elsa Thiaville and Anthony Ventresque, from SFI Lero & School of Computer Science, University College Dublin, Ireland and Joe Kenny from Zeeko, an Irish company.

8.2.4 Influence of Threat Occurrence and Repeatability on the Sense of Embodiment and Threat Response in VR

Participants: Rebecca Fribourg, Ferran Argelaguet, Anatole Lécuyer.

Does a virtual threat harm the Virtual Reality (VR) experience? We explored the potential impact of threat occurrence (see Figure 11) and repeatability on users' Sense of Embodiment (SoE) and threat response [41]. The main findings of our experiment are that the introduction of a threat does not alter users' SoE, but might change their behaviour while performing a task after the threat occurrence. In addition, threat repetitions did not show any effect on users' subjective SoE, or on subjective and objective responses to threat. Taken together, our results suggest that embodiment studies should expect a potential change in participants' behaviour while performing a task after a threat has been introduced, but that threat introduction and repetition do not seem to impact the subjective measure of the SoE (user responses to questionnaires) nor its objective measure (behavioural response to a threat towards the virtual body).

Figure 11: Overview of the virtual environment representing a factory (left), an avatar representing a user placing an ingot on the plate arriving on the conveyor belt (center), and the crusher threatening the user by suddenly going down while the user's hand is under it.

This work was done in collaboration with the MimeTIC team and Davidson College.

8.2.5 Studying the Role of Haptic Feedback on Virtual Embodiment in a Drawing Task

Participants: Grégoire Richard, Ferran Argelaguet, Anatole Lécuyer.

This work [34] investigates the role of haptic feedback on virtual embodiment in a context of active and fine manipulation. In particular, we explore which haptic cue, of varying ecological validity, has more influence on virtual embodiment. We conducted a within-subject experiment with 24 participants and compared self-reported embodiment of a humanoid avatar during a coloring task under three conditions: force feedback, vibrotactile feedback, and no haptic feedback. In the experiment, force feedback was more ecological as it matched reality more closely, while vibrotactile feedback was more symbolic. Taken together, our results show a significant superiority of force feedback over no haptic feedback regarding embodiment, and a significant superiority of force feedback over the other two conditions regarding subjective performance. These results suggest that more ecological feedback is better suited to elicit embodiment during fine manipulation tasks.

This work was done in collaboration with the Loki Inria team.

8.2.6 Studying the Inter-Relation Between Locomotion Techniques and Embodiment in Virtual Reality

Participants: Diane Dewez, Ferran Argelaguet, Anatole Lécuyer.

This work [40] explored the potential inter-relation between locomotion and embodiment by focusing on the two following questions: Does the locomotion technique have an impact on the user's sense of embodiment? Does embodying an avatar have an impact on the user's preference and performance depending on the locomotion technique? To address these questions, we conducted a user study with sixty participants. Three widely used locomotion techniques were evaluated: real walking, walking-in-place and virtual steering (see Figure 12). All participants performed four tasks with and without a full-body avatar. The results show that participants had a comparable sense of embodiment with all techniques when embodied in an avatar seen from a first-person perspective, and that the presence or absence of the virtual avatar did not alter their performance while navigating, independently of the technique. Taken together, our results represent a first attempt to qualify the inter-relation between virtual navigation and virtual embodiment, and suggest that the 3D locomotion technique used has little influence on the user's sense of embodiment in VR.

This work was done in collaboration with the MimeTIC team.

Figure 12: Illustration of three tasks performed by the participants in the study exploring the inter-relation between locomotion and virtual embodiment. The same tasks were also performed without an avatar. From left to right: Training task, Corridor task, Path-following task and Columns task.

8.2.7 The impact of stylization on face recognition

Participants: Ferran Argelaguet, Anatole Lécuyer, Nicolas Olivier.

While digital humans are a key aspect of the rapidly evolving areas of virtual reality, gaming, and online communications, many applications would benefit from using digital personalized (stylized) representations of users, as they have been shown to strongly increase immersion, presence and emotional response. In particular, depending on the target application, one may want to look like a dwarf or an elf in a heroic fantasy world, or like an alien on another planet, in accordance with the style of the narrative. While creating such virtual replicas requires stylizing the user's features onto the virtual character, no formal study had been conducted to assess the ability to recognize stylized characters. We carried out a perceptual study investigating the effect of the degree of stylization on the ability to recognize an actor, and the subjective acceptability of stylizations 51 (see Figure 13). Results show that recognition rates decrease when the degree of stylization increases, while acceptability of the stylization increases. These results provide recommendations for achieving good compromises between stylization and recognition, and pave the way to new stylization methods providing a trade-off between stylization and recognition of the actor.

A face with two non-human stylizations (top and bottom) at increasing levels, from low (left) to high (right). The original human face is shown on the far left.
Figure 13: A face with two non-human stylizations (top and bottom) at increasing levels, from low (left) to high (right). The original human face is shown on the far left.

This work was done in collaboration with the MimeTIC team and InterDigital.

8.2.8 3Dexterity: Finding your place in a 3-armed world

Participants: Alexandre Audinot, Diane Dewez, Gwendal Fouché, Rebecca Fribourg, Thomas Howard, Flavien Lécuyer, Tiffany Luong, Victor Mercado, Adrien Reuzeau, Thomas Rinnert, Guillaume Vailland, Ferran Argelaguet.

In the context of the IEEE VR 2020 3DUI Contest entitled “Embodiment for the Difference”, we showcased a VR application highlighting the challenges that people with physical disabilities face in their daily lives. Two-armed users are placed in a world where people normally have three arms, making them effectively physically disabled. The scenario takes the user through the process of struggling with everyday interactions (designed for humans with three arms), then receiving a third-arm prosthesis and thus recovering some level of autonomy. The experience is intended to generate a sense of difference and empathy for physically disabled persons.

8.3 Augmented Reality Tools and Usages

8.3.1 Can Retinal Projection Displays Improve Spatial Perception in Augmented Reality?

Participants: Étienne Peillard, Jean-Marie Normand, Ferran Argelaguet Sanz, Guillaume Moreau, Anatole Lécuyer.

Commonly used Head Mounted Displays in Augmented Reality (AR), namely Optical See-Through (OST) displays, suffer from a main drawback: their focal lenses can only provide a fixed focal distance. Such a limitation is suspected to be one of the main factors for distance misperception in AR.

In this work 52, we studied the use of an emerging kind of AR display to tackle such perception issues: Retinal Projection Displays (RPD). With RPDs, virtual images have no focal distance and the AR content is always in focus. We conducted the first reported experiment evaluating egocentric distance perception of observers using Retinal Projection Displays (see Figure 14). We compared the precision and accuracy of depth estimation between real and virtual targets, displayed by either OST HMDs or RPDs. Interestingly, our results show that RPDs provide depth estimates in AR closer to real ones compared to OST HMDs. Indeed, the use of an OST device was found to lead to an overestimation of the perceived distance by 15%, whereas the distance overestimation bias dropped to 3% with RPDs. Moreover, the task was reported as having the same level of difficulty in both conditions, with no difference in precision. As such, our results shed a first light on the benefits of retinal projection displays in terms of user perception in Augmented Reality, suggesting that RPDs are a promising technology for AR applications in which accurate distance perception is required.

This work was done in collaboration with Yuta Itoh, from the Augmented Vision Laboratory of the Tokyo Institute of Technology.

Illustration of the technical setup used in 52 to evaluate the impact of using Retinal Projection Displays on perception in AR.
Figure 14: Illustration of the technical setup used in 52 to evaluate the impact of using Retinal Projection Displays on perception in AR.

8.3.2 A Unified Design & Development Framework for Mixed Interactive Systems

Participants: Guillaume Bataille, Valérie Gouranton, Bruno Arnaldi.

Mixed reality, natural user interfaces and the internet of things are complementary computing paradigms. They converge towards new forms of interactive systems named mixed interactive systems. Because of their rapidly growing complexity, mixed interactive systems raise new challenges for designers and developers. We need new abstractions of these systems in order to describe their real-virtual interplay. We also need to break mixed interactive systems down into pieces in order to segment their complexity into comprehensible subsystems. This work 37 presents a framework to enhance the design and development of these systems. We propose a model unifying the paradigms of mixed reality, natural user interfaces and the internet of things. Our model decomposes a mixed interactive system into a graph of mixed entities. Our framework implements this model, which facilitates interactions between users, mixed reality devices and connected objects (see Figure 15). To demonstrate our approach, we present how designers and developers can use this framework to develop a mixed interactive system dedicated to smart-building occupants.

A Unified Design & Development Framework for Mixed Interactive Systems.
Figure 15: A Unified Design & Development Framework for Mixed Interactive Systems.

This work was done in collaboration with Jérémy Lacoche, from Orange.
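
To give a concrete, deliberately simplified intuition of this decomposition, the sketch below models a mixed interactive system as a small graph of mixed entities, each pairing the state of a physical object with that of its virtual twin. All class and attribute names here are our own illustrative assumptions, not the actual API of the framework.

```python
# Minimal sketch of the "graph of mixed entities" abstraction (hypothetical
# names, not the actual framework API): each entity pairs the state of a
# physical object with the state of its virtual twin, and edges link entities
# that interact with each other.

from dataclasses import dataclass, field

@dataclass
class MixedEntity:
    name: str
    real_state: dict = field(default_factory=dict)     # state of the physical object
    virtual_state: dict = field(default_factory=dict)  # state of its virtual twin
    links: list = field(default_factory=list)          # edges towards other entities

    def connect(self, other: "MixedEntity") -> None:
        """Add a directed edge, e.g. 'user hand' -> 'connected lamp'."""
        self.links.append(other)

    def synchronize(self) -> None:
        """Propagate the real state to the virtual twin (one possible policy)."""
        self.virtual_state.update(self.real_state)

# Example: a user interacting with a smart-building lamp through an AR headset.
hand = MixedEntity("user_hand", real_state={"position": (0.2, 1.1, 0.4)})
lamp = MixedEntity("ceiling_lamp", real_state={"on": False})
hand.connect(lamp)

lamp.real_state["on"] = True  # IoT event coming from the physical lamp
lamp.synchronize()            # the virtual twin now reflects the new state
print(lamp.virtual_state)     # {'on': True}
```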

8.4 Haptic Feedback

8.4.1 Towards Haptic Images: A Survey on Touchscreen-Based Surface Haptics

Participants: Antoine Costes, Ferran Argelaguet, Anatole Lécuyer.

In this work 17, we propose a survey on touchscreen-based surface haptics. The development of tactile screens opens new perspectives for co-located images and haptic rendering, leading to the concept of “haptic images.” They emerge from the combination of image data, rendering hardware, and haptic perception, enabling one to perceive haptic feedback while manually exploring an image. This nevertheless raises two scientific challenges, which serve as thematic axes for this survey (see Figure 16). Firstly, the choice of appropriate haptic data raises a number of issues about human perception, measurement, modeling and distribution. Secondly, the choice of appropriate rendering technology implies a difficult trade-off between expressiveness and usability.

This work was done in collaboration with the InterDigital company.

Challenges of “Haptic Images”.
Figure 16: Challenges of “Haptic Images”.

8.4.2 Capacitive Sensing for Improving Contact Rendering with Tangible Objects in VR

Participants: Xavier de Tinguy, Maud Marchal, Anatole Lécuyer.

In this work 18, we propose an innovative approach to the tracking and rendering of contacts with tangible objects in VR, compensating for the relative positioning error between the tracked hand and the tangible object so as to achieve a better visuo-haptic synchronization upon contact and preserve immersion during interaction in VR. We employ one tangible object to provide distributed haptic sensations. It is equipped with capacitive sensors to estimate the proximity of the user's fingertips to its surface. This information is then used to retarget, prior to contact, the fingertip positions obtained from a standard vision tracking system, so as to achieve a better synchronization between virtual and tangible contacts (see Figure 17). The main contributions of this work can thus be summarized as follows. We proposed a novel approach for enhancing contact rendering in VR when using tangible objects, instrumenting the latter with capacitive sensors. We designed and showcased a sensing system and a visuo-haptic interaction technique enabling high contact synchronization between what users see and feel. We conducted a user study showing the capability of our combined approach, versus two standalone state-of-the-art tracking systems (Vicon and HTC Vive), to improve the VR experience.

This work was done in collaboration with the Inria Rainbow team.

Using capacitive sensing to reach a better visuo-haptic synchronization upon contact and preserve immersion during interaction in VR.
Figure 17: Using capacitive sensing to reach a better visuo-haptic synchronization upon contact and preserve immersion during interaction in VR.
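
The retargeting principle described above can be summarized by a minimal sketch: as the capacitive sensor reports the fingertip getting closer to the tangible surface, the vision-tracked fingertip position is progressively pulled onto the corresponding point of the virtual surface, so that visual and physical contact occur simultaneously. This is our own simplification, assuming a single proximity estimate per fingertip and a hypothetical onset distance; it is not the published implementation.

```python
# Illustrative sketch of proximity-based fingertip retargeting (our own
# simplification, not the paper's code).

import numpy as np

def retarget_fingertip(tracked_pos, surface_pos, proximity, onset=0.03):
    """
    tracked_pos : fingertip position from the vision tracking system (metres)
    surface_pos : closest point on the virtual surface (metres)
    proximity   : distance to the tangible surface estimated from capacitance (metres)
    onset       : distance at which retargeting starts (hypothetical value)
    """
    # Blend weight grows from 0 (far away) to 1 (touching) as proximity drops.
    w = np.clip(1.0 - proximity / onset, 0.0, 1.0)
    return (1.0 - w) * np.asarray(tracked_pos) + w * np.asarray(surface_pos)

# Example: 5 mm from the tangible object, the displayed fingertip is already
# pulled most of the way onto the virtual surface.
print(retarget_fingertip([0.10, 0.00, 0.02], [0.10, 0.00, 0.00], proximity=0.005))
```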

8.4.3 WeATaViX: WEarable Actuated TAngibles for VIrtual reality eXperiences

Participants: Xavier de Tinguy, Thomas Howard, Maud Marchal, Anatole Lécuyer.

This work 60 focuses on the design and evaluation of a wearable haptic interface for natural manipulation of tangible objects in Virtual Reality (VR). It proposes an interaction concept lying between encountered-type and tangible haptics. The actuated one-degree-of-freedom interface brings a tangible object in and out of contact with the user's palm, rendering the making and breaking of contact with virtual objects, and allowing grasping and manipulation of virtual objects. Attached to the back of the user's hand with a sticky layer of silicone, the lightweight device is made as unobtrusive as possible for the user (see Figure 18). Device performance tests show that changes in contact state can be rendered with delays as low as 50 ms, with additional improvements to contact synchronicity obtained through our proposed interaction technique. An exploratory user study in VR showed that our device can render compelling grasp and release interactions with static and slowly moving virtual objects, contributing to the users' immersion.

This work was done in collaboration with the Inria Rainbow team.

The WeATaViX wearable device.
Figure 18: The WeATaViX wearable device.
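
The device's basic behavior can be illustrated by a short control-loop sketch: swing the tangible object into the palm when the virtual hand is close enough to a virtual object, and retract it otherwise. The angles, grasp radius and function names below are hypothetical values chosen for illustration, not the device firmware.

```python
# Minimal control-loop sketch for a 1-DoF encountered-type tangible
# (hypothetical parameters, not the actual firmware).

CONTACT_ANGLE = 90.0   # servo angle placing the tangible in the palm (assumed)
REST_ANGLE = 0.0       # retracted position (assumed)
GRASP_RADIUS = 0.05    # distance (m) at which contact should be rendered

def update_servo(hand_pos, object_pos):
    """Return the commanded servo angle for the current frame."""
    dist = sum((h - o) ** 2 for h, o in zip(hand_pos, object_pos)) ** 0.5
    return CONTACT_ANGLE if dist < GRASP_RADIUS else REST_ANGLE

# Virtual hand approaching a virtual apple: the tangible is brought to the palm.
print(update_servo((0.00, 1.00, 0.30), (0.02, 1.01, 0.32)))  # -> 90.0
print(update_servo((0.00, 1.00, 0.30), (0.40, 1.20, 0.60)))  # -> 0.0
```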

8.4.4 Design and Evaluation of Interaction Techniques Dedicated to Integrate Encountered-Type Haptic Displays in Virtual Environments

Participants: Victor Mercado, Maud Marchal, Anatole Lécuyer.

In this work 33, we presented novel interaction techniques (ITs) dedicated to encountered-type haptic displays (ETHDs) (see Figure 19). The techniques aim at addressing issues commonly reported for these devices, such as limited contact areas, lags and unexpected collisions with the user. First, our work proposes a design framework based on several parameters defining the interactive process between the user and the ETHD (input, movement control, displacement and contact). Five techniques based on different ramifications of the design space were conceived, respectively named Swipe, Drag, Clutch, Bubble and Follow. Then, a use-case scenario was designed to depict the usage of these techniques on the task of touching and coloring a wide, flat surface. Finally, a user study based on the coloring task was conducted to assess the performance and user experience of each IT. Results were in favor of the Drag and Clutch techniques, which are based on manual surface displacement, absolute position selection and intermittent contact interaction. Taken together, our results and design methodology pave the way to the design of future ITs for ETHDs in virtual environments.

This work was done in collaboration with the Inria Rainbow team.

Our setup for interaction techniques dedicated to integrate encountered-type haptic displays in virtual environments.
Figure 19: Our setup for interaction techniques dedicated to integrate encountered-type haptic displays in virtual environments.

8.4.5 “Kapow!”: Augmenting Contacts with Real and Virtual Objects Using Stylized Visual Effects

Participants: Victor Mercado, Jean-Marie Normand, Anatole Lécuyer.

In this work 66, we proposed a set of stylized visual effects (VFX) meant to improve the sensation of contact with objects in Augmented Reality (AR). Various graphical effects have been conceived, such as virtual cracks, virtual wrinkles, or even virtual onomatopoeias inspired by comics (see Figure 20). The VFX are meant to augment the perception of contact, with either real or virtual objects, in terms of material properties or contact location for instance. These VFX can be combined with a pseudo-haptics approach to further increase the range of simulated physical properties of the touched materials. An illustrative setup based on a HoloLens headset was designed, in which our proposed VFX could be explored. The VFX appear each time a contact is detected between the user's finger and an object of the scene. Such a VFX-based approach could be introduced in AR applications for which the perception and display of contact information are important.

Our visual effects to emphasize virtual contacts in AR.
Figure 20: Our visual effects to emphasize virtual contacts in AR.
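
At its core, the approach amounts to spawning a material-dependent effect at each detected contact. The sketch below is our own toy illustration of that dispatch logic; the material-to-effect mapping and function names are assumptions, not the application's code.

```python
# Toy sketch of contact-triggered stylized VFX selection (our own illustration,
# not the actual AR application).

EFFECTS = {            # hypothetical material -> effect mapping
    "glass": "virtual cracks",
    "fabric": "virtual wrinkles",
    "metal": "'KAPOW!' onomatopoeia",
}

def on_contact(material: str, contact_point) -> str:
    """Return a description of the VFX spawned at the contact location."""
    effect = EFFECTS.get(material, "dust puff")  # fallback effect (assumed)
    return f"spawn {effect} at {contact_point}"

# A finger touching a glass pane detected at the given 3D point:
print(on_contact("glass", (0.1, 1.2, 0.4)))
```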

8.4.6 PUMAH: Pan-tilt Ultrasound Mid-Air Haptics for larger interaction workspace in virtual reality

Participants: Thomas Howard, Maud Marchal, Anatole Lécuyer.

Mid-air haptic interfaces are promising tools for providing tactile feedback in Virtual Reality (VR) applications, as they do not require the user to be tethered to, hold, or wear any system or device. Currently, one of the most mature solutions for providing mid-air haptic feedback is through focused ultrasound phased arrays. They modulate the phase of an array of ultrasound emitters so as to generate focused points of oscillating high pressure, which in turn elicit vibrotactile sensations when encountering a user's skin. While these arrays feature a reasonably large vertical workspace, they are not capable of displaying stimuli far beyond their horizontal limits, severely limiting their workspace in the lateral dimensions. In this work 25, we propose an innovative low-cost solution for enlarging the workspace of focused ultrasound arrays. It features two degrees of freedom, rotating the array around the pan and tilt axes, thereby significantly increasing the usable workspace and enabling multi-directional feedback (see Figure 21). Our hardware tests and human-subject study in an ecological VR setting show a 14-fold increase in workspace volume, with focal-point repositioning speeds over 0.85 m/s while delivering tactile feedback with a positional accuracy below 18 mm. Finally, we propose a representative use case to exemplify the potential of our system for VR applications.

This work was done in collaboration with the Inria Rainbow team.

The PUMAH setup.
Figure 21: The PUMAH setup.
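
The aiming problem at the heart of such a pan-tilt mount reduces to simple spherical geometry: given a target focal point expressed in the base frame of the array, compute the two rotation angles that point the array's axis at it. The sketch below assumes a frame convention (x right, y forward, z up) of our own choosing; it is not the published controller.

```python
# Geometry sketch for aiming a pan-tilt ultrasound array at a focal point
# (assumed frame convention, not the published controller).

import math

def pan_tilt_to(target):
    """Return (pan, tilt) in degrees pointing the array axis at `target` (m)."""
    x, y, z = target
    pan = math.atan2(x, y)                  # rotation around the vertical axis
    tilt = math.atan2(z, math.hypot(x, y))  # elevation above the horizontal plane
    return math.degrees(pan), math.degrees(tilt)

# A focal point 20 cm to the right, 30 cm ahead and 10 cm above the array:
pan, tilt = pan_tilt_to((0.20, 0.30, 0.10))
print(f"pan = {pan:.1f} deg, tilt = {tilt:.1f} deg")
```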

8.4.7 Comparing Motion-based Versus Controller-based Pseudo-haptic Weight Sensations in VR

Participants: Anatole Lécuyer.

In this work 44, we examined whether pseudo-haptic experiences can be achieved using a game controller without motion tracking. For this purpose, we implemented a virtual hand manipulation method that uses the controller’s analog stick. We compared the method’s pseudo-haptic experience to that of the conventional approach of using a hand-held motion controller. The results suggest that our analog stick manipulation can present pseudo-weight sensations in a similar way to the conventional approach. This means that interaction designers and users can also choose to utilize analog stick manipulation for pseudo-haptic experiences, as an alternative to motion controllers.

This work was done in collaboration with the Hirose Lab (University of Tokyo).
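
A common way to produce such pseudo-weight sensations, for either input device, is to lower the control-display ratio as the simulated mass grows, so that heavy objects visually lag behind the user's commands. The sketch below is our own simplified illustration of this classical principle, not the paper's implementation; the gain value is an arbitrary assumption.

```python
# Sketch of the classical pseudo-haptic weight principle (our simplification,
# not the paper's implementation): heavier virtual objects move less per unit
# of input, whether the input is a tracked motion or an analog stick tilt.

def virtual_displacement(input_amount, mass_kg, gain=0.05):
    """
    input_amount : normalized input per frame (hand displacement or stick tilt, 0..1)
    mass_kg      : simulated mass of the carried object
    gain         : base displacement in metres per unit input (hypothetical)
    """
    # The control-display ratio decreases with mass: the object lags behind.
    return gain * input_amount / (1.0 + mass_kg)

print(virtual_displacement(1.0, mass_kg=0.5))  # light object: larger motion
print(virtual_displacement(1.0, mass_kg=5.0))  # heavy object: smaller motion
```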

8.4.8 Influence of virtual reality visual feedback on the illusion of movement induced by tendon vibration of wrist in healthy participants

Participants: Salomé Le Franc, Mathis Fleury, Mélanie Cogné, Anatole Lécuyer.

The illusion of movement induced by tendon vibration is often used in neurorehabilitation. The aim of our study 27 was to investigate which modality of visual feedback in Virtual Reality (VR), associated with tendon vibration of the wrist, could induce the best illusion of movement. Thirty healthy participants received tendon vibration of the wrist inducing an illusion of movement. Three VR visual conditions were applied: a moving virtual hand (Moving condition), a static virtual hand (Static condition), or no virtual hand at all (Hidden condition) (see Figure 22). The participants had to quantify the intensity of the illusory movement on a Likert scale, as well as the subjective degree of extension of their wrist. The Moving condition induced a higher intensity of illusion of movement and a higher sensation of wrist extension than both the Hidden condition (p<0.001 and p<0.001, respectively) and the Static condition (p<0.001 and p<0.001, respectively). This study demonstrated the importance of carefully selecting the visual feedback to improve the illusion of movement induced by tendon vibration. Further work will consist in testing the same hypothesis with stroke patients.

This work was done in collaboration with CHU Rennes.

Experimental setup for studying tendon vibration illusions in VR.
Figure 22: Experimental setup for studying tendon vibration illusions in VR.

8.5 Brain-Computer Interfaces

8.5.1 A survey on the use of haptic feedback in BCI/NF

Participants: Mathis Fleury, Giulia Lioi, Anatole Lécuyer.

Neurofeedback (NF) and brain-computer interface (BCI) applications rely on the registration and real-time feedback of individual patterns of brain activity with the aim of achieving self-regulation of specific neural substrates or control of external devices. These approaches have historically employed visual stimuli. However, in some cases vision is unsuitable or inadequately engaging. Other sensory modalities, such as auditory or haptic feedback, have been explored, and multisensory stimulation is expected to improve the quality of the interaction loop. Moreover, for motor imagery tasks, closing the sensorimotor loop through haptic feedback may be relevant for motor rehabilitation applications, as it can promote plasticity mechanisms. In this work 19, we survey the various haptic technologies and describe their application to BCIs and NF (see Figure 23). We identify major trends in the use of haptic interfaces for BCI and NF systems and discuss crucial aspects that could motivate further studies.

This work was done in collaboration with the Inria EMPENN team.

Integrating haptics in the Neurofeedback/BCI loop.
Figure 23: Integrating haptics in the Neurofeedback/BCI loop.

8.5.2 A Multi-Target Motor Imagery Training Using Bimodal EEG-fMRI Neurofeedback: A Pilot Study in Chronic Stroke Patients

Participants: Mathis Fleury, Giulia Lioi, Anatole Lécuyer.

Traditional rehabilitation techniques present limitations, and the majority of patients show poor 1-year post-stroke recovery. Thus, Neurofeedback (NF) and Brain-Computer Interface applications for stroke rehabilitation purposes are gaining increased attention. Indeed, NF has the potential to enhance volitional control of targeted cortical areas and thus to have an impact on motor function recovery. However, current implementations are limited by temporal, spatial or practical constraints of the specific imaging modality used. In this pilot work 30, and for the first time in the literature, we applied bimodal EEG-fMRI NF for upper limb stroke recovery on four stroke patients with different stroke characteristics and motor impairment severity. We also propose a novel, multi-target training approach that guides the training towards the activation of the ipsilesional primary motor cortex (see Figure 24). In addition to fMRI and EEG outcomes, we assess the integrity of the corticospinal tract (CST) with tractography. Preliminary results suggest the feasibility of our approach and show its potential to induce an augmented activation of ipsilesional motor areas, depending on the severity of the stroke deficit. Only the two patients with a preserved CST and subcortical lesions succeeded in upregulating the ipsilesional primary motor cortex and exhibited a functional improvement of upper limb motricity. These findings highlight the importance of taking into account the variability of the stroke patient population and enabled us to identify inclusion criteria for the design of future clinical studies.

This work was done in collaboration with the Inria EMPENN team and CHU Rennes.

Results of our pilot study on Neurofeedback for stroke patients.
Figure 24: Results of our pilot study on Neurofeedback for stroke patients.

8.5.3 Impact of 1D and 2D visualisation on EEG-fMRI neurofeedback training during a motor imagery task

Participants: Mathis Fleury, Giulia Lioi, Anatole Lécuyer.

Bi-modal EEG-fMRI neurofeedback (NF) is of great interest: first, it can improve the quality of NF training by combining different real-time information (haemodynamic and electrophysiological) about the participant's brain activity; second, it has the potential to shed light on the link and synergy between the two modalities (EEG-fMRI). There are, however, different ways to present NF scores to the participant during bi-modal neurofeedback sessions. In this work 58, we investigated the impact of using a 1D or a 2D representation when visual feedback is given during a motor imagery task. Results show a better coherence between EEG and fMRI when the 2D display is used, with subjects able to regulate EEG and fMRI separately.

This work was done in collaboration with the Inria EMPENN team.
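
The difference between the two displays can be illustrated with a toy sketch: bi-modal NF produces one EEG score and one fMRI score per update, which a 1D display merges into a single gauge while a 2D display maps to two separate axes. This is our own illustration of the display logic under assumed normalized scores, not the study's rendering code.

```python
# Toy illustration of 1D vs 2D bi-modal NF feedback layouts (our own sketch,
# assuming normalized scores in [0, 1]; not the study's code).

def feedback_1d(eeg_score, fmri_score, w=0.5):
    """Single gauge value: weighted merge of the two modality scores."""
    return w * eeg_score + (1.0 - w) * fmri_score

def feedback_2d(eeg_score, fmri_score):
    """Cursor position: EEG on the x-axis, fMRI on the y-axis."""
    return (eeg_score, fmri_score)

print(feedback_1d(0.8, 0.4))  # -> 0.6 (one bar; modalities indistinguishable)
print(feedback_2d(0.8, 0.4))  # -> (0.8, 0.4) (each modality regulated separately)
```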

8.5.4 Simultaneous EEG-fMRI during a neurofeedback task, a brain imaging dataset for multimodal data integration

Participants: Mathis Fleury, Giulia Lioi, Anatole Lécuyer.

Combining EEG and fMRI allows for the integration of fine spatial and accurate temporal resolution, yet presents numerous challenges, notably if performed in real time to implement a Neurofeedback (NF) loop. Here we describe a multimodal dataset of EEG and fMRI acquired simultaneously during a motor imagery NF task, supplemented with MRI structural data. The study involved 30 healthy volunteers undergoing five training sessions. We showed the potential and merit of simultaneous EEG-fMRI NF in previous work. In this work 31, we illustrate the type of information that can be extracted from this dataset and show its potential use. This represents one of the first simultaneous recordings of EEG and fMRI for NF, and we present here the first open-access bi-modal NF dataset integrating EEG and fMRI. We believe that it will be a valuable tool to (1) advance and test methodologies for multi-modal data integration, (2) improve the quality of the NF provided, (3) improve methodologies for de-noising EEG acquired under MRI, and (4) investigate the neuromarkers of motor imagery using multi-modal information.

This work was done in collaboration with the Inria EMPENN team.

8.5.5 Uncovering EEG Correlates of Covert Attention in Soccer Goalkeepers: Towards Innovative Sport Training Procedures

Participants: Ferran Argelaguet, Anatole Lécuyer.

Advances in sports sciences and neurosciences offer new opportunities to design efficient and motivating sport training tools. For instance, using Neurofeedback (NF), athletes can learn to self-regulate specific brain rhythms and consequently improve their performance. In this work 26, we focused on soccer goalkeepers' Covert Visual Spatial Attention (CVSA) abilities, which are essential for these athletes to reach high performance. We looked for Electroencephalography (EEG) markers of CVSA usable for virtual reality-based NF training procedures, i.e., markers that comply with the following criteria: (1) specific to CVSA, (2) detectable in real time and (3) related to goalkeepers' performance/expertise. Our results revealed that the best-known EEG marker of CVSA (an increased α-power ipsilateral to the attended hemi-field) was not usable, since it did not comply with criteria 2 and 3. Nonetheless, we highlighted a significant positive correlation between athletes' improvement in CVSA abilities and the increase of their α-power at rest. While the specificity of this marker remains to be demonstrated, it complied with both criteria 2 and 3. This result suggests that it may be possible to design innovative ecological training procedures for goalkeepers, for instance using a combination of NF and cognitive tasks performed in virtual reality.

This work was done in collaboration with the MimeTIC team and the EPFL.

8.5.6 Detecting System Errors in Virtual Reality Using EEG Through Error-Related Potentials

Participants: Hakim Si Mohammed, Ferran Argelaguet, Anatole Lécuyer.

When persons interact with the environment and experience or witness an error (e.g. an unexpected event), a specific brain pattern, known as the error-related potential (ErrP), can be observed in their electroencephalographic (EEG) signals. Virtual Reality technology enables users to interact with computer-generated simulated environments and provides multi-modal sensory feedback. Using VR systems can, however, be error-prone. In this work 53, we investigate the presence of ErrPs when Virtual Reality users face three types of visualization errors: (Te) tracking errors when manipulating virtual objects, (Fe) feedback errors, and (Be) background anomalies. We conducted an experiment in which 15 participants (see Figure 25) were exposed to the three types of errors while performing a center-out pick-and-place task in virtual reality. The results showed that tracking errors generate error-related potentials, whereas the other types of errors did not generate such discernible patterns. In addition, we show that it is possible to detect the ErrPs generated by tracking losses in single trials, with an accuracy of 85%. This constitutes a first step towards the automatic detection of error-related potentials in VR applications, paving the way to the design of adaptive and self-corrective VR/AR applications exploiting information directly from the user's brain.

This work was done in collaboration with TU Graz.

Participant equipped with both EEG and VR headsets for our study on error potentials.
Figure 25: Participant equipped with both EEG and VR headsets for our study on error potentials.
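
A generic single-trial ErrP detection chain of the kind used in this literature can be sketched as follows: band-pass the EEG in the low frequencies, epoch it around the events, and train a shrinkage-regularized linear classifier. The sketch below runs on synthetic stand-in data; the sampling rate, filter band, window sizes and classifier choice are our own assumptions, not the study's exact processing pipeline.

```python
# Generic single-trial ErrP detection sketch on synthetic data (our own
# assumptions, not the study's exact pipeline).

import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

FS = 250  # sampling rate in Hz (assumed)

def epochs(eeg, events, pre=0.2, post=0.8):
    """Cut fixed windows around event onsets; eeg is (channels, samples)."""
    b, a = butter(4, [1, 10], btype="band", fs=FS)  # ErrPs are low-frequency
    filtered = filtfilt(b, a, eeg, axis=1)
    n_pre, n_post = int(pre * FS), int(post * FS)
    return np.stack([filtered[:, s - n_pre:s + n_post] for s in events])

# Synthetic stand-in data: 8 channels, 100 events with binary labels
# (1 = error trial); accuracy on random data stays near chance level.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 60 * FS))
events = rng.integers(FS, 59 * FS, size=100)
labels = rng.integers(0, 2, size=100)

X = epochs(eeg, events)[:, :, ::10].reshape(100, -1)  # decimate, then flatten
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
print(cross_val_score(clf, X, labels, cv=5).mean())
```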

8.6 Cultural Heritage

8.6.1 From the engraved tablet to the digital tablet, history of a fifteenth century music score

Participants: Ronan Gaugne, Valérie Gouranton.

During an important archaeological excavation in the center of the city of Rennes, a 15th-century engraved tablet was discovered in the area of a former convent. The tablet is covered with engraved inscriptions on both sides, and includes a musical score.

Different digitization techniques were used in order to study and valorize the tablet (see Figure 26). Digitization allowed for an advanced analysis of the inscriptions, and made it possible to generate a complete and precise 3D model of the artifact, which was used to produce an interactive application deployed both on tactile tablets and on a website 24. The interactive application integrates a musical interpretation of the score, giving access to a testimony of intangible heritage. This interdisciplinary work gathered archaeologists, researchers in computer science and physics, and a professional musician.

This work was done in collaboration with Inrap, France.

Digitization of the tablet and visualisation of the interactive tablet.
Figure 26: Digitization of the tablet and visualisation of the interactive tablet.

8.6.2 Reconstruction of life aboard an East India Company ship in the 18th century

Participants: Ronan Gaugne, Valérie Gouranton.

Historical reconstructions based on new technologies such as virtual reality are becoming increasingly common. However, most of them offer frozen universes, emptied of human activity. As part of a collaboration between a research department in History, a computer science research institute and the 3D production platform of the West Digital Conservatory of Archaeological Heritage 67, we propose a reconstruction, based on motion capture, of the activities of life aboard the East India Company ship Le Boullongne. A first work, carried out in 2014, produced a functional and interactive simulation of Le Boullongne in virtual reality 68. This reconstruction already included a few sailors and animated virtual animals, but without significant activity. In this new phase 15, about ten scenes from everyday life were identified through the study of historical documents and engravings. These scenes were scripted and acted out by the historians involved in the project, then captured and processed by computer scientists. Digital sailors were modeled by the West Digital Conservatory of Archaeological Heritage, in collaboration with historians. Several scenes were finally integrated into the virtual Boullongne, giving the reconstruction a human dimension and providing a testimony of sailors' life aboard 18th-century ships (see Figure 27).

This work was done in collaboration with EPI MimeTIC, Rennes, France, UMR TEMOS (Maritime History) at Université Bretagne Sud, Lorient, France, and UMR CReAAH (Archaeology), Univ Rennes, France.

Mocap of onboard activity with Historians, and restitution in the simulation of the 18th century ship.
Figure 27: Mocap of onboard activity with Historians, and restitution in the simulation of the 18th century ship.

8.6.3 3D Sketching of the Fortified Entrance of the Citadel of Aleppo from a Few Sightseeing Photos

Participants: Ronan Gaugne.

Originally built during the Ayyubid era by the son of Saladin, al-Malik al-Zahir Ghazi (1186–1216), and rebuilt throughout the Mamluk era (1260–1516), the entrance to the citadel of Aleppo was particularly affected by an earthquake in 1822, by bombings during the Battle of Aleppo in August 2012, and by a collapse of ramparts due to an explosion in July 2015. Even if, compared to other Syrian sites, there are still enough vestiges to grasp the initial architecture, the civil war situation makes any “classic” digitization process by photogrammetry or laser scanning extremely difficult. On this basis, we propose a process to produce a 3D model “as relevant as possible” from only a few sightseeing photographs 59. This process combines fast 3D sketching by photogrammetry, 3D modeling and texture mapping, and relies on a corpus of pictures available on the internet. Furthermore, it has the advantage of being applicable to destroyed monuments, provided sufficient pictures are available.

Five photos taken in 2005 by a tourist archaeologist around the entrance were first used to generate a partial and poor-quality point cloud with photogrammetry. The main elements of the inner gate and a part of the arched bridge are distinguishable in this point cloud. Because the architecture is fairly rectilinear and symmetrical, it was possible to redraw most of the outlines in 3D by constantly comparing with what is visible on these first photos. The next step was the enrichment of the 3D model from this initial geometric basis, thanks to a corpus of photos available on the internet. This corpus was constituted from a selection of pictures obtained with a Google Web Search on the keywords “Citadel” and “Aleppo”. The selection took into account both the resolution of the images and the coverage of the items of interest, and gathered 66 pictures. The enrichment of the 3D model is performed through an iterative process made up of four main steps: (i) orthophoto extraction from some photos of the corpus, (ii) 3D modeling from these orthophotos, (iii) seamless texture extraction, and (iv) texture mapping.

There are still some uncovered lateral areas and unreadable engraved wall writings, and some details are reconstructed naively, but the essential items, allowing to visually characterize the fortified entrance as a whole, have been reconstituted. The 3D model was first used to produce renderings intended to obtain initial reviews from archaeologists and architecture specialists, whose photos and complementary documents allow correcting errors and filling gaps. We wish to set up a collaborative process to improve the model, based on exchanges with experts of the domain. The resulting model aims at feeding an interactive website dedicated to the 3D display of heritage under threat. Other renderings of the model, such as virtual reality or 3D printing, could also be considered to share this testimony of our heritage. The application of this methodology to other sites deserves further studies, depending on the possibilities of photogrammetry, the architectural complexity, and the human means available for 3D modeling.

This work was done in collaboration with UMR CReAAH.

8.6.4 Creative Harmony

Participants: Ronan Gaugne, Valérie Gouranton.

"Creative Harmony" is a networked virtual reality artwork, inviting spectators from two different cities (Rennes, France, and Linz, Austria) to co-create a virtual environment in real time through gestures. With interaction through body motion, each participant is led to create the landscape of a marine universe, whether on the surface of the water or in the abyss. Through letting go, music and virtual dancers, spectators are able to express themselves with their bodies and connect with each other, virtually and spiritually, to evolve the world in which they find themselves immersed. The artwork was presented at the international Digital Art Festival Ars Electronica, through four performances sessions during two days.

During the performance, on the one hand, at the Ars Electronica Deep Space https://ars.electronica.art/center/en/exhibitions/deepspace/ in Linz (Austria), spectators evolved on a seabed in a calm and peaceful atmosphere; on the other hand, in Rennes, in the IMMERSIA virtual reality platform http://www.irisa.fr/immersia/, a dancer accompanied by a musician interacted on the surface of the sea under auroral light (see Figure 28). These two virtual environments were networked to create real-time interactivity between distant people, who interacted and responded to each other through their bodies and movements to create this collaborative digital experience. Through this artwork, three axes are highlighted: connecting with nature, connecting with oneself and connecting with others. “Creative Harmony” aims to promote the importance of the relationship with each of these three axes.

This work was done in collaboration with the departments of Art of Univ. Rennes 2 and Univ. Paris 8, France.

The Creative Harmony performance in the Ars Electronica Center (left) and in Immersia (right).
Figure 28: The Creative Harmony performance in the Ars Electronica Center (left) and in Immersia (right).

9 Bilateral contracts and grants with industry

9.1 Grants with Industry

Orange Labs

Participants: Guillaume Bataille, Bruno Arnaldi, Valérie Gouranton.

This grant started in October 2017 and finished in November 2020. It supported Guillaume Bataille's PhD program with the Orange Labs company on "Natural Interactions with IoT using VR/AR".

In the context of this collaboration, the following patents have been filed:

  • Guillaume Bataille. Enhanced feedback of a user hands detection in mixed reality with Hololens. PCT/FR2020/050438 filed (international) in March 2020.
  • Guillaume Bataille, Bruno Arnaldi, and Valérie Gouranton. Design-Oriented Mixed-Reality Internal Model (DOMIM). PCT/EP2020/056209 filed in March 2020.
  • Guillaume Bataille, Bruno Arnaldi, and Valérie Gouranton. Virtual and tangible hybrid interactions. FR2001408 filed (France) in Feb. 2020. PCT/EP2020/057357 filed (international) in March 2020.
InterDigital

Participants: Nicolas Olivier, Ferran Argelaguet, Anatole Lécuyer.

This grant started in February 2019. It supports Nicolas Olivier's CIFRE PhD program with the InterDigital company on "Avatar Stylization". This PhD is co-supervised with the MimeTIC team.

Orange Labs

Participants: Lysa Gramoli, Bruno Arnaldi, Valérie Gouranton.

This grant started in October 2020. It supports Lysa Gramoli's CIFRE PhD program with the Orange Labs company on "Simulation of autonomous agents in connected virtual environments".

Sogea Bretagne

Participants: Vincent Goupil, Bruno Arnaldi, Valérie Gouranton.

This grant started in October 2020. It supports Vincent Goupil's CIFRE PhD program with the Sogea Bretagne company (Vinci Construction) on "Hospital 2.0: Generation of Virtual Reality Applications by BIM Extraction".

10 Partnerships and cooperations

10.1 International initiatives

10.1.1 ANR-FRQSC INTROSPECT

Participants: Valérie Gouranton, Bruno Arnaldi, Ronan Gaugne, Flavien Lécuyer, Adrien Reuzeau.

INTROSPECT is a 3-year project funded by the French ANR and the "Fonds de Recherche Société et Culture" (FRQSC) of Quebec, Canada. This international collaboration involves researchers in computer science and archeology from France and Canada: Hybrid (Inria-IRISA), CReAAH, Inrap, the company Image ET, Université Laval and INRS-ETE. INTROSPECT aims to develop new uses and tools for archaeologists that facilitate access to knowledge through interactive numerical introspection methods that combine computed tomography with 3D visualization technologies, such as Virtual Reality, tangible interactions and 3D printing. The scientific core of the project is the systematization of the relationship between the artefact, the archaeological context, the digital object, the virtual reconstruction of the archaeological context that represents it, and its tangible double resulting from 3D printing. This axiomatization of innovative methods makes it possible to enhance research on our heritage and to exploit accessible digital means of dissemination. This approach departs from traditional methods and applies to specific archaeological problems. Several case studies will be studied in various archaeological contexts on both sides of the Atlantic. Quebec museums are also partners in the project to spread the results among the general public.

10.1.2 Informal international partners

  • Dr. Takuji Narumi and Prof. Michitaka Hirose from University of Tokyo (Japan), on “Virtual Embodiment”
  • Dr. Gerd Bruder from University of Central Florida (USA), on “Virtual Navigation”
  • Prof. Gudrun Klinker from Technical University of Munich (Germany), on “Augmented Reality”

10.2 European initiatives

10.2.1 FP7 & H2020 Projects

TACTILITY

Participants: Ferran Argelaguet, Anatole Lécuyer, Panagiotis Kourtesis, Sebastian Vizcay.

  • Title: Tactility
  • Program: H2020 - ICT 25
  • Duration: July 2019 - June 2022
  • Coordinator: Fundación Tecnalia Research and Innovation (Spain)
  • Partners: Aalborg University (Denmark), Università degli Studi di Genova (Italy), Tecnalia Serbia (Serbia), Universitat de Valencia (Spain), Manus Machinae B.V. (Netherlands), Smartex S.R.L. (Italy), Immersion (France)
  • Inria contact: Ferran Argelaguet
  • Abstract: TACTILITY is a multidisciplinary innovation and research action with the overall aim of including rich and meaningful tactile information into the novel interaction systems through technology for closed-loop tactile interaction with virtual environments. By mimicking the characteristics of the natural tactile feedback, it will substantially increase the quality of immersive VR experience used locally or remotely (tele-manipulation). The approach is based on transcutaneous electro-tactile stimulation delivered through electrical pulses with high resolution spatio-temporal distribution. To achieve it, significant development of technologies for transcutaneous stimulation, textile-based multi-pad electrodes and tactile sensation electronic skin, coupled with ground-breaking research of perception of elicited tactile sensations in VR, is needed. The key novelty is in the combination of: 1) the ground-breaking research of perception of electrotactile stimuli for the identification of the stimulation parameters and methods that evoke natural like tactile sensations, 2) the advanced hardware, that will integrate the novel high-resolution electrotactile stimulation system and state of the art artificial electronic skin patches with smart textile technologies and VR control devices in a wearable mobile system, and 3) the novel firmware, that handles real-time encoding and transmission of tactile information from virtual objects in VR, as well as from the distant tactile sensors (artificial skins) placed on robotic or human hands. Proposed research and innovation action would result in a next generation of interactive systems with higher quality experience for both local and remote (e.g., tele-manipulation) applications. Ultimately, TACTILITY will enable high fidelity experience through low-cost, user friendly, wearable and mobile technology.
H-REALITY

Participants: Anatole Lécuyer, Xavier de Tinguy, Thomas Howard.

  • Title: H-REALITY
  • Program: H2020 - FET Open
  • Duration: 2018 - 2021
  • Coordinator: Univ. Birmingham (UK)
  • Partners: CNRS (France), TU Delft (Netherlands), ACTRONIKA (France), ULTRALEAP (UK)
  • Inria contact: Anatole Lécuyer
  • Abstract: The vision of H-REALITY is to be the first to imbue virtual objects with a physical presence, providing a revolutionary, untethered, virtual-haptic reality: H-Reality. This ambition will be achieved by integrating the commercial pioneers of ultrasonic “non-contact” haptics, state-of-the-art vibrotactile actuators, novel mathematical and tribological modelling of the skin and mechanics of touch, and experts in the psychophysical rendering of sensation. The result will be a sensory experience where digital 3D shapes and textures are made manifest in real space via modulated, focused ultrasound, ready for the untethered hand to feel, where next-generation wearable haptic rings provide directional vibrotactile stimulation, informing users of an object's dynamics, and where computational renderings of specific materials can be distinguished via their surface properties. The implications of this technology will transform online interactions; dangerous machinery will be operated virtually from the safety of the home, and surgeons will hone their skills on thin air.

10.2.2 Collaborations in European programs, except FP7 and H2020

Interreg ADAPT

Participants: Valérie Gouranton, Bruno Arnaldi, Ronan Gaugne, Florian Nouviale, Alexandre Audinot, Guillaume Vailland.

  • Program: Interreg VA France (Channel) England
  • Project acronym: ADAPT
  • Project title: Assistive Devices for empowering disAbled People through robotic Technologies
  • Duration: 01/2017 - 06/2021
  • Coordinator: ESIGELEC/IRSEEM Rouen
  • Other partners: INSA Rennes - IRISA, LGCGM, IETR (France), Université de Picardie Jules Verne - MIS (France), Pôle Saint Hélier (France), CHU Rouen (France), Réseau Breizh PC (France), Ergovie (France), Pôle TES (France), University College of London - Aspire CREATE (UK), University of Kent (UK), East Kent Hospitals Univ NHS Found. Trust (UK), Health and Europe Centre (UK), Plymouth Hospitals NHS Trust (UK), Canterbury Christ Church University (UK), Kent Surrey Sussex Academic Health Science Network (UK), Cornwall Mobility Center (UK).
  • Inria contact: Valérie Gouranton
  • Abstract: The ADAPT project aims to develop innovative assistive technologies in order to support the autonomy and to enhance the mobility of power wheelchair users with severe physical/cognitive disabilities. In particular, the objective is to design and evaluate a power wheelchair simulator as well as to design a multi-layer driving assistance system.

This project is carried out in collaboration with the Rainbow team.

10.3 National initiatives

10.3.1 ANR

ANR LOBBY-BOT

Participants: Anatole Lécuyer, Victor Mercado.

LOBBY-BOT is a 4-year project (2017-2021) funded by the French National Research Agency (ANR). The partners are: Inria Rennes (Hybrid), CLARTE (coordinator), RENAULT, and LS2N. The objective of LOBBY-BOT is to address the scientific challenges of encountered-type haptic devices (ETHDs), an alternative category of haptic devices relying on a mobile physical prop, usually actuated by a robot, that constantly follows the user's hand and encounters it only when needed. The project follows two research axes: the first deals with robot control, and the second with interaction techniques adapted to ETHDs. The involvement of Hybrid relates to the second research axis of the project. The final project prototype will be used to assess the benefits of ETHDs in an industrial use case: the perceived quality of an automotive interior.

ANR GRASP-IT

Participants: Anatole Lécuyer, Mélanie Cogné, Salomé Le Franc.

GRASP-IT is a 4-year project (2020-2024) funded by the French National Research Agency (ANR). The partners are: Inria Rennes (Hybrid), LORIA (coordinator), PErSEUs, CHU Rennes, CHU Toulouse, Inria Sophia, IRR UGECAM-N, and Alchimies. The GRASP-IT project aims to help post-stroke patients recover upper limb control by improving their kinesthetic motor imagery (KMI) generation, using a tangible and haptic interface within a gamified Brain-Computer Interface (BCI) training environment. This innovative KMI-based BCI will integrate complementary interaction modalities, such as tangible and haptic interactions, in a 3D-printable flexible orthosis. We propose to design and test the usability (including efficacy towards the stimulation of the motor cortex) and acceptability of this multimodal BCI. The GRASP-IT project also proposes to design and integrate a gamified non-immersive virtual environment to interact with. This multimodal solution should provide a more meaningful, engaging and compelling stroke rehabilitation training program based on KMI production. In the end, the project will integrate and evaluate neurofeedback within the gamified multimodal BCI in an ambitious clinical evaluation with 75 hemiplegic patients in 3 different rehabilitation centers in France.

10.3.2 Inria projects

Inria Challenge AVATAR

Participants: Anatole Lécuyer, Ferran Argelaguet, Diane Dewez, Rebecca Fribourg.

AVATAR is a 4-year "Inria Project Lab" initiative (2018-2022) funded by Inria to support a national research effort on avatars and virtual embodiment. This joint lab involves several Inria teams: Hybrid, Potioc, Loki, MimeTIC, Graphdeco and Morpheo, as well as external partners: Univ. Barcelona and the Faurecia and Technicolor companies. This project aims at improving several aspects of avatars in immersive applications: reconstruction, animation, rendering, interaction, multi-sensory feedback, etc.

Inria Challenge NAVISCOPE

Participants: Ferran Argelaguet, Gwendal Fouché.

NAVISCOPE is a 4-year "Inria Project Lab" initiative (2018-2022) funded by Inria to support a national research effort on image-guided navigation and visualization of large data sets in live cell imaging and microscopy. This joint lab involves several Inria teams: Serpico, Aviz, Beagle, Hybrid, Mosaic, Parietal and Morpheme, as well as external partners: INRA and the Institut Curie. This project aims at improving visualization and machine learning methods in order to provide systems capable of assisting scientists in obtaining a better understanding of massive amounts of information.

Inria Covid-19 Project VERARE

Participants: Mélanie Cogné, Anatole Lécuyer, Justine Saint-Aubert, Valérie Gouranton, Ferran Argelaguet, Florian Nouviale, Ronan Gaugne.

VERARE (Virtual Environments for Rehabilitation After REsuscitation) is a 2-year research project funded by Inria (specific Inria Covid-19 call for projects) for assessing the efficacy of using Virtual Reality for motor rehabilitation (improving walking recovery) after resuscitation. This ambitious clinical project gathers the Hybrid team, federating 18 of its members, and the University Hospital of Rennes (Physical and Rehabilitation Medicine Unit and Intensive Care Units).

10.4 Regional initiatives

CHU Rennes Project HANDS

Participants: Mélanie Cogné, Anatole Lécuyer, Salomé Le Franc.

HANDS (HAptic Neurofeedback Design for Stroke) is a 3-year project funded by the University Hospital of Rennes (CHU Rennes) for assessing the influence of associating haptic feedback with virtual reality during Neurofeedback for improving upper limb motor function after stroke. This project covers the PhD grant of Salomé Le Franc.

INCR Project ARIADE

Participants: Mélanie Cogné, Guillaume Moreau, Léa Pillette, Anatole Lécuyer.

ARIADE (Augmented Reality for Improving Navigation in Dementia) is a 3-year research project funded by the INCR (Institut des Neurosciences Cliniques de Rennes) and the CORECT of the University Hospital of Rennes for assessing the acceptability, efficacy and tolerance of visual cues administered using augmented reality for improving spatial navigation of participants with Alzheimer's disease who suffer from a topographical disorientation. This clinical project involves Hybrid, Empenn, Ecole Centrale de Nantes, and the University Hospital of Rennes (Physical and Rehabilitation Medicine Unit and Neurological Unit).

IRT b<>com

Participants: Bruno Arnaldi, Ferran Argelaguet, Valérie Gouranton, Anatole Lécuyer, Maud Marchal, Florian Nouviale.

b<>com is a French Institute of Research and Technology (IRT). The main goal of this IRT is to accelerate the development and marketing of tools, products and services in the field of digital technologies. Our team has been regularly involved in collaborations with b<>com within various 3-year projects, such as ImData (on Immersive Interaction) and GestChir (on Augmented Healthcare), which both ended in 2016. Follow-up projects called NeedleWare (on Augmented Healthcare) and VUXIA (on Human Factors) started in 2016 and 2018, respectively.

CNPAO Project

Participants: Valérie Gouranton, Ronan Gaugne.

CNPAO ("Conservatoire Numérique du Patrimoine Archéologique de l'Ouest") is an ongoing research project partially funded by the Université Européenne de Bretagne (UEB) and the Université de Rennes 1. It involves IRISA/Hybrid and CReAAH. The main objectives are: (i) a sustainable and centralized archiving of 2D/3D data produced by the archaeological community, (ii) free access to metadata, (iii) secure access to data for the different actors involved in scientific projects, and (iv) support and advice for these actors in 3D data production and exploration through the latest digital technologies, modeling tools and virtual reality systems.

11 Dissemination

11.1 Promoting scientific activities

11.1.1 Scientific events: organisation

Member of the organizing committees
  • Anatole Lécuyer was co-organizer of the "Mixed Reality & Brain-Computer Interfaces" Workshop at IEEE ISMAR 2020.
Chair of conference program committees
  • Ferran Argelaguet was Program Chair of the IEEE VR 2020 conference track and ICAT-EGVE Conference 2020.
Member of the conference program committees
  • Valérie Gouranton was member of the International Program Committee of HUCAPP 2020
  • Guillaume Moreau was member of the International Program Committee of IEEE ISMAR 2020 and Chair of ISMAR 2020 Doctoral Consortium.
  • Jean-Marie Normand was member of the International Program Committee of the IEEE VR 2020 conference track, of the GRAPP 2020 conference, of Augmented Humans and was Special Track Chair of the Virtual and Augmented Reality track of the 2020 IEEE Conference on Games.
Reviewer
  • Anatole Lécuyer was reviewer for IEEE VR 2020.
  • Ferran Argelaguet was reviewer for IEEE ISMAR 2020, IEEE VR 2020, ACM CHI 2020, ACM VRST 2020, Siggraph ASIA XR, Eurohaptics 2020.
  • Valérie Gouranton was reviewer for HUCAPP 2020
  • Guillaume Moreau was reviewer for IEEE ISMAR 2020, IEEE VR 2020.
  • Jean-Marie Normand was reviewer for IEEE VR 2020, IEEE ISMAR 2020, ACM CHI 2020, AH 2020, IEEE CoG 2020.

11.1.2 Journal

Member of the editorial boards
  • Anatole Lécuyer is Associate Editor of the IEEE Transactions on Visualization and Computer Graphics, Frontiers in Virtual Reality, and Presence journals.
  • Valérie Gouranton is Review Editor of Frontiers in Virtual Reality.
  • Ferran Argelaguet is Review Editor of Frontiers in Virtual Reality.
  • Guillaume Moreau is Review Editor of Frontiers in Virtual Reality.
  • Jean-Marie Normand is Review Editor of Frontiers in Virtual Reality.
Reviewer - reviewing activities
  • Ferran Argelaguet was reviewer for IEEE Transactions on Visualization and Computer Graphics, Computer Animation and Virtual Worlds, Computers & Graphics, and IEEE Transactions on Human-Machine Systems.
  • Valérie Gouranton was reviewer for TVCG 2020, MDPI 2020

11.1.3 Invited talks

  • Anatole Lécuyer was keynote speaker at CHIST-ERA 2020, and invited speaker at Empathic Computing Lab (Australia), “Académie de Médecine”, “Journées Nouvelles Imageries”, “Colloque CNRS/Académies Interfaces Cerveau-Machine”, “Matinales Rennes Atalante”.
  • Guillaume Moreau was invited speaker at the Approche congress (new technologies for rehabilitation of handicapped people): perception issues in augmented reality.
  • Jean-Marie Normand and Guillaume Moreau were invited speakers at the CREPS Pays de la Loire to present AR/VR potential and issues for sports.

11.1.4 Leadership within the scientific community

  • Anatole Lécuyer is Member of the Steering Committee of the IEEE VR Conference, and Member of the Scientific Board of INCR (“Institut des Neurosciences Cliniques de Rennes”).
  • Ronan Gaugne is Member of the Selection and Validation Committee for the French cluster “Pôle Images et Réseaux”, and of the Consortium 3D of TGIR HumaNum.
  • Valérie Gouranton is Member of the Executive Committee of the AFRV (French Association for Virtual Reality), and of the Consortium 3D of TGIR HumaNum.
  • Guillaume Moreau is Member of the Steering Committee of the IEEE ISMAR Conference.

11.1.5 Scientific expertise

  • Anatole Lécuyer was expert for “Institut Cognition Bordeaux”, and “Institut des Neurosciences Cliniques de Rennes”
  • Guillaume Moreau is member of the ANSES working group on VR/AR; he was member of the HCERES evaluation committee for the SIGMA Engineering School in January 2020

11.1.6 Research administration

  • Valérie Gouranton is Head of the cross-cutting axis “Art, Heritage & Culture” at IRISA UMR 6074, an elected member of the IRISA laboratory council, and a member of the Conseil National des Universités, 27th section (computer science).
  • Guillaume Moreau has been Deputy Dean for Research and Innovation at IMT Atlantique since Dec. 2020

11.2 Teaching - Supervision - Juries

11.2.1 Teaching

Anatole Lécuyer:

  • Master AI-ViC: “Haptic Interaction and Brain-Computer Interfaces”, 6h, M2, Ecole Polytechnique, FR
  • Master MNRV: “Haptic Interaction”, 9h, M2, ENSAM, Laval, FR
  • Master SIBM: “Haptic and Brain-Computer Interfaces”, 4.5h, M2, University of Rennes 1, FR
  • Master CN: “Haptic Interaction and Brain-Computer Interfaces”, 9h, M1 and M2, University of Rennes 2, FR
  • Master SIF: “Pseudo-Haptics and Brain-Computer Interfaces”, 6h, M2, INSA Rennes, FR

Bruno Arnaldi:

  • Master SIF: “VRI: Virtual Reality and Multi-Sensory Interaction Course”, 4h, M2, INSA Rennes, FR
  • Master INSA Rennes: “CG: Computer Graphics”, 12h, M2, INSA Rennes, FR
  • Master INSA Rennes: “Virtual Reality”, courses 6h, projects 16h, M1 and M2, INSA Rennes, FR
  • Master INSA Rennes: Projects on “Virtual Reality”, 50h, M1, INSA Rennes, FR

Ferran Argelaguet:

  • Master STS Informatique: “Techniques d'Interaction Avancées”, 26h, M2, ISTIC, University of Rennes 1, FR
  • Master SIF: “Virtual Reality and Multi-Sensory Interaction”, 8h, M2, INSA Rennes, FR
  • Master SIF: “Data Mining and Visualization”, 2h, M2, University of Rennes 1, FR
  • Master AI-ViC: “Virtual Reality and 3D Interaction”, 3h, M2, École Polytechnique, FR

Valérie Gouranton:

  • Licence: Project on “Virtual Reality”, 16h, L3 (responsible for this course), INSA Rennes, FR
  • Master INSA Rennes: “Virtual Reality”, 13h, M2, INSA Rennes, FR
  • Master INSA Rennes: Projects on “Virtual Reality”, 50h, M1, INSA Rennes, FR
  • Master CN: “Virtual Reality”, 5h, M1, University of Rennes 2, FR

Ronan Gaugne:

  • INSA Rennes: Projects on “Virtual Reality”, 24h, L3, INSA Rennes, FR
  • Master Digital Creation: “Virtual Reality”, 6h, M1, University of Rennes 2, FR

Guillaume Moreau:

  • Virtual Reality Major, “C++ Programming for VR”, 30h, M1/M2, École Centrale de Nantes, FR
  • Virtual Reality Major, “Fundamentals of Virtual Reality”, 6h, M1/M2, École Centrale de Nantes, FR
  • Virtual Reality Major, “Computer Graphics”, 4h, M1/M2, École Centrale de Nantes, FR
  • Virtual Reality Major, “Advanced Software Development”, 20h, M1/M2, École Centrale de Nantes, FR
  • Computer Science Major, “Discrete Mathematics”, 10h, M1/M2, École Centrale de Nantes, FR

Jean-Marie Normand:

  • Virtual Reality Major, “Computer Graphics”, 24h, M1/M2, École Centrale de Nantes, FR
  • Virtual Reality Major, “Fundamentals of Virtual Reality”, 14h, M1/M2, École Centrale de Nantes, FR
  • Virtual Reality Major, “Computer Vision and Augmented Reality”, 26h, M1/M2, École Centrale de Nantes, FR
  • Virtual Reality Major, “Advanced Concepts in VR/AR”, 24h, M1/M2, École Centrale de Nantes, FR
  • Virtual Reality Major, “Projects on Virtual Reality”, 20h, M1/M2, École Centrale de Nantes, FR
  • Computer Science Major, “Object-Oriented Programming in Java”, 30h, M1/M2, École Centrale de Nantes, FR
  • Master AI-ViC: “Augmented Reality”, 6h, M2, École Polytechnique, FR
  • Master MTI3D: “Virtual Embodiment”, 3h, M2, ENSAM, Laval, FR

11.2.2 Supervision

  • PhD: Hakim Si-Mohammed, “Design and Study of Interactive Systems based on Brain Computer Interfaces and Augmented Reality”, INSA de Rennes, Defended December 3rd, 2019, Supervised by Anatole Lécuyer, Géry Casiez (Mjolnir, Inria) and Ferran Argelaguet
  • PhD: Xavier de Tinguy, “Haptic manipulation in virtual environments”, Defended December 15th, 2020, Supervised by Maud Marchal, Claudio Pacchierotti (Rainbow, Inria) and Anatole Lécuyer
  • PhD: Rebecca Fribourg, “Perception and interaction with and via avatars”, Defended November 4th, 2020, Supervised by Ferran Argelaguet, Ludovic Hoyet (Mimetic, Inria) and Anatole Lécuyer
  • PhD: Flavien Lécuyer, “Interactive digital introspection methods for archeology”, Defended September 11th, 2020, Supervised by Valérie Gouranton, Grégor Marchand (CNRS UMR CREAAH) and Bruno Arnaldi
  • PhD: Guillaume Bataille, “Natural interactions with IoT using VR/AR”, Defended November 13th, 2020, Supervised by Valérie Gouranton, Danielle Pelé (Orange Labs), Jérémy Lacoche (Orange Labs) and Bruno Arnaldi
  • PhD: Etienne Peillard, “Improving Perception and Interaction in Augmented Reality”, Defended November 24th, 2020, Supervised by Guillaume Moreau, Ferran Argelaguet, Anatole Lécuyer and Jean-Marie Normand
  • PhD: Romain Terrier, “Presence of self and others in a collaborative virtual environment”, Defended December 29th, 2020, Supervised by Valérie Gouranton, Nico Pallamin (b<>com), Cédric Bach (HDG) and Bruno Arnaldi
  • PhD in progress: Mathis Fleury, “Neurofeedback based on fMRI and EEG”, Started in November 2017, Supervised by Anatole Lécuyer and Christian Barillot (Visages, Inria)
  • PhD in progress: Tiffany Luong, “Affective VR: acquisition, modelling, and exploitation of affective states in virtual reality”, Started in February 2018, Supervised by Anatole Lécuyer, Jean-Marc Diverrez (b<>com) and Ferran Argelaguet
  • PhD in progress: Hugo Brument, “Towards user-adapted interaction techniques based on human locomotion laws for navigating in virtual environments”, Started in October 2018, Supervised by Ferran Argelaguet, Maud Marchal and Anne-Hélène Olivier (MimeTIC, Inria)
  • PhD in progress: Diane Dewez, “Avatar-Based Interaction in Virtual Reality”, Started in October 2018, Supervised by Anatole Lécuyer, Ferran Argelaguet and Ludovic Hoyet (MimeTIC)
  • PhD in progress: Nicolas Olivier, “Avatar stylization”, Started in January 2018, Supervised by Franck Multon (MimeTIC team) and Ferran Argelaguet
  • PhD in progress: Victor Rodrigo Mercado Garcia, “Encountered-type haptics”, Started in October 2018, Supervised by Maud Marchal and Anatole Lécuyer
  • PhD in progress: Guillaume Vailland, “Outdoor wheelchair assisted navigation: reality versus virtuality”, Started in November 2018, Supervised by Valérie Gouranton and Marie Babel (Rainbow, Inria)
  • PhD in progress: Gwendal Fouché, “Immersive Interaction and Visualization of Temporal 3D Data”, Started in October 2019, Supervised by Ferran Argelaguet, Charles Kervrann (Serpico Team) and Emmanuelle Faure (Mosaic Team).
  • PhD in progress: Adelaide Genay, “Embodiment in Augmented Reality”, Started in October 2019, Supervised by Anatole Lécuyer, Martin Hachet (Potioc, Inria)
  • PhD in progress: Martin Guy, “Physiological markers for characterizing virtual embodiment”, Started in October 2019, Supervised by Guillaume Moreau, Jean-Marie Normand and Camille Jeunet (CNRS, CLEE)
  • PhD in progress: Grégoire Richard, “Touching Avatars: The role of haptic feedback in virtual embodiment”, Started in October 2019, Supervised by Géry Casiez (Loki, Inria), Thomas Pietzrak (Loki, Inria), Anatole Lécuyer and Ferran Argelaguet
  • PhD in progress: Sebastian Vizcay, “Dexterous Interaction in Virtual Reality using High-Density Electrotactile Feedback”, Started in November 2019, Supervised by Ferran Argelaguet, Maud Marchal and Claudio Pacchierotti (Rainbow, Inria)
  • PhD in progress: Salomé Le Franc, “Haptic Neurofeedback Design for Stroke”, Started in January 2019, Supervised by Anatole Lécuyer, Isabelle Bonan and Mélanie Cogné
  • PhD in progress: Lysa Gramoli, “Simulation of autonomous agents in connected virtual environments”, Started in October 2020, Supervised by Valérie Gouranton, Bruno Arnaldi, Jérémy Lacoche (Orange), Anthony Foulonneau (Orange)
  • PhD in progress: Vincent Goupil, “Hospital 2.0: Generation of Virtual Reality Applications by BIM Extraction”, Started in October 2020, Supervised by Valérie Gouranton, Bruno Arnaldi, Anne-Solène Michaud (Vinci Construction)

11.2.3 Juries

  • Anatole Lécuyer was president and referee for the PhD Thesis of Grégoire Dupont de Dinechin.
  • Bruno Arnaldi was referee for the HDR of Laure Leroy (Paris 8), and president of the committee for the PhD defenses of Hadrien Gurnel (b<>com), Alexis Souchet (Paris 8) and Simon Hilt (ENS Rennes).
  • Valérie Gouranton was examiner for the PhD thesis of Mehdi Hafsia, a member of an INSERM engineer recruitment jury, and a member of the selection committee for an associate professor recruitment at Sorbonne University.
  • Guillaume Moreau was referee for the PhD defenses of Jason Rambach (DE), Jérôme Nicolle and Pierre Raimbaud; president of the committee for Lucile Riaboff and Rebecca Fribourg; and examiner for the thesis of Aylen Ricca.

11.3 Popularization

11.3.1 Articles and Media Appearances

  • “Sciences et Avenir”: article on our results related to “danger in virtual reality” (December)
  • “Ouest-France”: article on our results related to “virtual embodiment” (November)
  • “Sciences et Avenir”: article on our results related to “virtual co-embodiment” (November)
  • “France 5 TV - Magazine de la Santé”: interview with Anatole Lécuyer (February)
  • “RTL Radio”: interview with Anatole Lécuyer (January)
  • “Sciences Ouest”: interview with Anatole Lécuyer, Yoann Launey, Justine Saint-Aubert and Mélanie Cogné on “Covid-19 : la rééducation grâce à la réalité virtuelle” (rehabilitation through virtual reality) (November)

11.3.2 Interventions

  • “DirtyBiology”: interview with Anatole Lécuyer in the video “irréalité virtuelle” (virtual unreality) (December)
  • “Telegraphe Toulon”: podcast on virtual reality with Anatole Lécuyer (December)
  • “Festival Numok 2020”: webinar by Anatole Lécuyer (November)
  • “Soirée Jeunes chercheurs de l'INCR”: presentation by Léa Pillette (Rennes, October)
  • “Je dis Science”: seminar with Anatole Lécuyer (Paris, October)
  • “Arts et Sciences - Les bâtisseurs”: seminar with Ronan Gaugne and Valérie Gouranton (Saint-Malo, October)
  • Festival Ars Electronica: artistic performance in Immersia by Valérie Gouranton, Ronan Gaugne and Florian Nouviale (Rennes, September)
  • “Person-to-Person interactions: from analysis to applications”: workshop with Rebecca Fribourg (September)
  • “MT180 - Ma thèse en 180 secondes”: presentation by Diane Dewez (Rennes, March)
  • “Matinales Rennes Atalante”: presentation by Anatole Lécuyer and Hakim Si-Mohammed (Rennes, January)
  • “Mardi de l'Espace des Sciences”: presentation by Anatole Lécuyer (Rennes, January)
  • “Journées Européennes de l'Archéologie”: remote demos by Bruno Arnaldi, Ronan Gaugne and Valérie Gouranton (June)

12 Scientific production

12.1 Major publications

  • 1. Ferran Argelaguet Sanz, Ludovic Hoyet, Michaël Trico and Anatole Lécuyer. The role of interaction in virtual embodiment: Effects of the virtual hand representation. In: IEEE Virtual Reality, Greenville, United States, March 2016, 3-10.
  • 2. Marie-Stéphanie Bracq, Estelle Michinov, Bruno Arnaldi, Benoît Caillaud, Bernard Gibaud, Valérie Gouranton and Pierre Jannin. Learning procedural skills with a virtual reality simulator: An acceptability study. Nurse Education Today, 79, August 2019, 153-160.
  • 3. Xavier De Tinguy, Claudio Pacchierotti, Anatole Lécuyer and Maud Marchal. Capacitive Sensing for Improving Contact Rendering with Tangible Objects in VR. IEEE Transactions on Visualization and Computer Graphics, January 2021.
  • 4. Mathis Fleury, Giulia Lioi, Christian Barillot and Anatole Lécuyer. A Survey on the Use of Haptic Feedback for Brain-Computer Interfaces and Neurofeedback. Frontiers in Neuroscience, 1 June 2020.
  • 5. Rebecca Fribourg, Ferran Argelaguet Sanz, Anatole Lécuyer and Ludovic Hoyet. Avatar and Sense of Embodiment: Studying the Relative Preference Between Appearance, Control and Point of View. IEEE Transactions on Visualization and Computer Graphics, 26(5), May 2020, 2062-2072.
  • 6. Ronan Gaugne, Françoise Labaune-Jean, Dominique Fontaine, Gaétan Le Cloirec and Valérie Gouranton. From the engraved tablet to the digital tablet, history of a fifteenth century music score. Journal on Computing and Cultural Heritage, 13(3), 2020, 1-18.
  • 7. Flavien Lécuyer, Valérie Gouranton, Aurélien Lamercerie, Adrien Reuzeau, Bruno Arnaldi and Benoît Caillaud. Unveiling the implicit knowledge, one scenario at a time. The Visual Computer, 2020, 1-12.
  • 8. Flavien Lécuyer, Valérie Gouranton, Adrien Reuzeau, Ronan Gaugne and Bruno Arnaldi. Create by doing - Action sequencing in VR. In: CGI 2019 - Computer Graphics International (Advances in Computer Graphics), Calgary, Canada, Springer International Publishing, June 2019, 329-335.
  • 9. Victor Mercado, Maud Marchal and Anatole Lécuyer. ENTROPiA: Towards Infinite Surface Haptic Displays in Virtual Reality Using Encountered-Type Rotating Props. IEEE Transactions on Visualization and Computer Graphics, 27(3), March 2021, 2237-2243.
  • 10. Guillaume Moreau, Bruno Arnaldi and Pascal Guitton. Virtual Reality, Augmented Reality: Myths and Realities. Computer Engineering Series, ISTE, March 2018, 322 p.
  • 11. Théophane Nicolas, Ronan Gaugne, Cédric Tavernier, Quentin Petit, Valérie Gouranton and Bruno Arnaldi. Touching and interacting with inaccessible cultural heritage. Presence: Teleoperators and Virtual Environments, 24(3), 2015, 265-277.
  • 12. Etienne Peillard, Yuta Itoh, Jean-Marie Normand, Ferran Argelaguet Sanz, Guillaume Moreau and Anatole Lécuyer. Can Retinal Projection Displays Improve Spatial Perception in Augmented Reality? In: ISMAR 2020 - 19th IEEE International Symposium on Mixed and Augmented Reality, Recife, Brazil, IEEE, November 2020, 124-133.
  • 13. Etienne Peillard, Thomas Thebaud, Jean-Marie Normand, Ferran Argelaguet Sanz, Guillaume Moreau and Anatole Lécuyer. Virtual Objects Look Farther on the Sides: The Anisotropy of Distance Perception in Virtual Reality. In: VR 2019 - 26th IEEE Conference on Virtual Reality and 3D User Interfaces, Osaka, Japan, IEEE, March 2019, 227-236.
  • 14. Hakim Si-Mohammed, Jimmy Petit, Camille Jeunet, Ferran Argelaguet Sanz, Fabien Spindler, Andéol Evain, Nicolas Roussel, Géry Casiez and Anatole Lécuyer. Towards BCI-based Interfaces for Augmented Reality: Feasibility, Design and Evaluation. IEEE Transactions on Visualization and Computer Graphics, 26(3), March 2020, 1608-1621.

12.2 Publications of the year

International journals

International peer-reviewed conferences

  • 36. Alexandre Audinot, Diane Dewez, Gwendal Fouché, Rebecca Fribourg, Thomas Howard, Flavien Lécuyer, Tiffany Luong, Victor Mercado, Adrien Reuzeau, Thomas Rinnert, Guillaume Vailland and Ferran Argelaguet. 3Dexterity: Finding your place in a 3-armed world. In: 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Atlanta, United States, March 2020, 521-522.
  • 37. Guillaume Bataille, Valérie Gouranton, Jérémy Lacoche, Danielle Pelé and Bruno Arnaldi. A Unified Design & Development Framework for Mixed Interactive Systems. In: VISIGRAPP 2020 - 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Valletta, Malta, February 2020, 1-12.
  • 38. Hugo Brument, Maud Marchal, Anne-Hélène Olivier and Ferran Argelaguet Sanz. Influence of Dynamic Field of View Restrictions on Rotation Gain Perception in Virtual Environments. In: EuroVR 2020 - 17th EuroVR International Conference, Valencia, Spain, October 2020, 20-40.
  • 39. Hugo Brument, Anne-Hélène Olivier, Maud Marchal and Ferran Argelaguet. Does the Control Law Matter? Characterization and Evaluation of Control Laws for Virtual Steering Navigation. In: ICAT-EGVE 2020 - International Conference on Artificial Reality and Telexistence & Eurographics Symposium on Virtual Environments, Florida, United States, December 2020, 1-10.
  • 40. Diane Dewez, Ludovic Hoyet, Anatole Lécuyer and Ferran Argelaguet. Studying the Inter-Relation Between Locomotion Techniques and Embodiment in Virtual Reality. In: IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Recife, Brazil, 2020, 1-10.
  • 41. Rebecca Fribourg, Evan Blanpied, Ludovic Hoyet, Anatole Lécuyer and Ferran Argelaguet Sanz. Influence of Threat Occurrence and Repeatability on the Sense of Embodiment and Threat Response in VR. In: ICAT-EGVE 2020 - International Conference on Artificial Reality and Telexistence & Eurographics Symposium on Virtual Environments, Florida, United States, December 2020, 1-9.
  • 42. Adélaïde Genay, Anatole Lécuyer and Martin Hachet. Incarner un Avatar en Réalité Augmentée : Revue de la Littérature. In: WACAI 2020 - Workshop sur les Affects, Compagnons artificiels et Interactions, Saint Pierre d'Oléron, France, June 2020.
  • 43. Gabriel Giraldo, Myriam Servières and Guillaume Moreau. Perception of Multisensory Wind Representation in Virtual Reality. In: ISMAR 2020 - 19th IEEE International Symposium on Mixed and Augmented Reality, Recife, Brazil, November 2020, 89-97.
  • 44. Yutaro Hirao, Tuukka Takala and Anatole Lécuyer. Comparing Motion-based Versus Controller-based Pseudo-haptic Weight Sensations in VR. In: VRW 2020 - IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops, Atlanta / Virtual, United States, March 2020, 305-310.
  • 45. Jiayi Hong, Ferran Argelaguet Sanz, Alain Trubuil and Tobias Isenberg. Design and Evaluation of Three Selection Techniques for Tightly Packed 3D Objects in Cell Lineage Specification in Botany. In: Graphics Interface, Mississauga, Canada, May 2021.
  • 46. Romain Lagneau, Alexandre Krupa and Maud Marchal. Active Deformation through Visual Servoing of Soft Objects. In: ICRA 2020 - IEEE International Conference on Robotics and Automation, Paris, France, May 2020, 1-7.
  • 47. Tiffany Luong, Ferran Argelaguet Sanz, Nicolas Martin and Anatole Lécuyer. Introducing Mental Workload Assessment for the Design of Virtual Reality Training Scenarios. In: VR 2020 - IEEE Conference on Virtual Reality and 3D User Interfaces, Virtual, United States, March 2020, 662-671.
  • 48. Tiffany Luong, Nicolas Martin, Anais Raison, Ferran Argelaguet, Jean-Marc Diverrez and Anatole Lécuyer. Towards Real-Time Recognition of Users' Mental Workload Using Integrated Physiological Sensors Into a VR HMD. In: ISMAR 2020 - IEEE International Symposium on Mixed and Augmented Reality, Virtual, Brazil, November 2020, 1-13.
  • 49. Maud Marchal, Gerard Gallagher, Anatole Lécuyer and Claudio Pacchierotti. Can Stiffness Sensations be Rendered in Virtual Reality Using Mid-air Ultrasound Haptic Technologies? In: EUROHAPTICS 2020 - 12th International Conference on Haptics, Leiden, Netherlands, September 2020, 1-8.
  • 50. Victor Mercado, Maud Marchal and Anatole Lécuyer. Design and Evaluation of Interaction Techniques Dedicated to Integrate Encountered-Type Haptic Displays in Virtual Environments. In: IEEE Conference on Virtual Reality and 3D User Interfaces, Atlanta / Virtual, United States, March 2020, 230-238.
  • 51. Nicolas Olivier, Ludovic Hoyet, Ferran Argelaguet, Fabien Danieau, Quentin Avril, Philippe Guillotel, Anatole Lécuyer and Franck Multon. The impact of stylization on face recognition. In: SAP 2020 - ACM Symposium on Applied Perception, Virtual, United States, September 2020, 1-9.
  • 52. Etienne Peillard, Yuta Itoh, Jean-Marie Normand, Ferran Argelaguet Sanz, Guillaume Moreau and Anatole Lécuyer. Can Retinal Projection Displays Improve Spatial Perception in Augmented Reality? In: ISMAR 2020 - 19th IEEE International Symposium on Mixed and Augmented Reality, Recife, Brazil, November 2020, 124-133.
  • 53. Hakim Si-Mohammed, Catarina Lopes-Dias, María Duarte, Ferran Argelaguet Sanz, Camille Jeunet, Géry Casiez, Gernot Müller-Putz, Anatole Lécuyer and Reinhold Scherer. Detecting System Errors in Virtual Reality Using EEG Through Error-Related Potentials. In: VR 2020 - 27th IEEE Conference on Virtual Reality and 3D User Interfaces, Atlanta, United States, March 2020, 653-661.
  • 54. Romain Terrier, Valérie Gouranton, Cédric Bach, Nico Pallamin and Bruno Arnaldi. Scenario-based VR Framework for Product Design. In: VISIGRAPP 2020 - 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Valletta, Malta, February 2020, 1-8.
  • 55. Elsa Thiaville, Jean-Marie Normand, Joe Kenny and Anthony Ventresque. Virtual Avatars as Children Companions: For a VR-based Educational Platform: How Should They Look Like? In: ICAT-EGVE 2020 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, Orlando, Florida, United States, December 2020.
  • 56. Guillaume Vailland, Yoren Gaffary, Louise Devigne, Valérie Gouranton, Bruno Arnaldi and Marie Babel. Vestibular Feedback on a Virtual Reality Wheelchair Driving Simulator: A Pilot Study. In: HRI 2020 - ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, United Kingdom, 2020.

National peer-reviewed conferences

  • 57. Guillaume Vailland, Yoren Gaffary, Louise Devigne, Valérie Gouranton, Bruno Arnaldi and Marie Babel. Simulateur de Conduite de Fauteuil Roulant avec Retours Vestibulaires : Une Etude Pilote. In: Handicap 2020 - 11ème Conférence sur les Aides Techniques pour les Personnes en Situation de Handicap, Paris, France, November 2020, 1-8.

Conferences without proceedings

Scientific book chapters

  • 59. Jean-Baptiste Barreau, Emmanuel Lanoë and Ronan Gaugne. 3D Sketching of the Fortified Entrance of the Citadel of Aleppo from a Few Sightseeing Photos. In: Digital Cultural Heritage, June 2020, 359-371.
  • 60. Xavier De Tinguy, Thomas Howard, Claudio Pacchierotti, Maud Marchal and Anatole Lécuyer. WeATaViX: WEarable Actuated TAngibles for VIrtual reality eXperiences. In: Haptics: Science, Technology, Applications (12th International Conference, EuroHaptics 2020, Leiden, The Netherlands, September 6-9, 2020, Proceedings), vol. 12272, September 2020, 262-270.
  • 61. Romain Terrier, Nicolas Martin, Jérémy Lacoche, Valérie Gouranton and Bruno Arnaldi. Do Distant or Colocated Audiences Affect User Activity in VR? In: Transactions on Computational Science XXXVII, July 2020, 1-18.

Reports & preprints

Other scientific publications

12.3 Cited publications

  • 67. Jean-Baptiste Barreau, Ronan Gaugne, Yann Bernard, Gaétan Le Cloirec and Valérie Gouranton. The West Digital Conservatory of Archaeological Heritage Project. In: DH, France, 2013, 1-8.
  • 68. Jean-Baptiste Barreau, Florian Nouviale, Ronan Gaugne, Yann Bernard, Sylviane Llinares and Valérie Gouranton. An Immersive Virtual Sailing on the 18th-Century Ship Le Boullongne. Presence: Teleoperators and Virtual Environments, 24(3), 2015, 201-219.
  • 69. Doug A. Bowman, Ernest Kruijff, Joseph J. LaViola and Ivan Poupyrev. 3D User Interfaces: Theory and Practice. Addison-Wesley, 2004.
  • 70. Anatole Lécuyer. Simulating Haptic Feedback Using Vision: A Survey of Research and Applications of Pseudo-Haptic Feedback. Presence: Teleoperators and Virtual Environments, 18(1), 2009, 39-53.