Our research project belongs to the scientific field of Virtual Reality (VR) and 3D interaction with virtual environments. VR systems can be used in numerous applications, such as industry (virtual prototyping, assembly or maintenance operations, data visualization), entertainment (video games, theme parks), arts and design (interactive sketching or sculpture, CAD, architectural mock-ups), education and science (physical simulations, virtual classrooms), or medicine (surgical training, rehabilitation systems). A major change that we foresee in the next decade in the field of Virtual Reality relates to the emergence of new paradigms of interaction (input/output) with Virtual Environments (VE).
As of today, the most common way to interact with 3D content remains measuring the user's motor activity, i.e., his/her gestures and physical motions when manipulating different kinds of input devices. However, a recent trend consists in soliciting more movement and greater physical engagement of the user's body. We can notably stress the emergence of bimanual interaction, natural walking interfaces, and whole-body involvement. These new interaction schemes bring a new level of complexity in terms of generic physical simulation of the potential interactions between the virtual body and the virtual surroundings, and a challenging trade-off between performance and realism. Moreover, research is also needed to characterize the influence of these new sensory cues on the resulting feelings of "presence" and immersion of the user.
Besides, a novel kind of user input has recently appeared in the field of virtual reality: the user's mental activity, which can be measured by means of a "Brain-Computer Interface" (BCI). Brain-Computer Interfaces are communication systems which measure the user's electrical cerebral activity and translate it, in real time, into an exploitable command. BCIs introduce a new way of interacting "by thought" with virtual environments. However, current BCIs can only extract a small number of mental states, and hence a small number of mental commands. Thus, research is still needed to extend the capacities of BCIs and to better exploit the few available mental states in virtual environments.
Our first motivation is thus to design novel “body-based” and “mind-based” controls of virtual environments and to reach, in both cases, more immersive and more efficient 3D interaction.
Furthermore, in current VR systems, motor activities and mental activities are always considered separately and exclusively. This echoes the well-known “body-mind dualism” at the heart of historical philosophical debates. In this context, our objective is to introduce novel “hybrid” interaction schemes in virtual reality, by considering motor and mental activities jointly, i.e., in a harmonious, complementary, and optimized way. We thus intend to explore novel paradigms of 3D interaction mixing body and mind inputs. Our approach becomes even more challenging when considering and connecting multiple users, which implies multiple bodies and multiple brains collaborating and interacting in virtual reality.
Our second motivation is thus to introduce a “hybrid approach” which mixes the mental and motor activities of one or multiple users in virtual reality.
The scientific objective of the Hybrid team is to improve the 3D interaction of one or multiple users with virtual environments, by making full use of the physical engagement of the body and by incorporating mental states by means of brain-computer interfaces. We intend to improve each component of this framework individually, but we also want to improve the subsequent combinations of these components.
The "hybrid" 3D interaction loop between one or multiple users and a virtual environment is depicted in Figure . Different kinds of 3D interaction situations are distinguished (red arrows, bottom): 1) body-based interaction, 2) mind-based interaction, 3) hybrid interaction, and/or 4) collaborative interaction (with at least two users). In each case, three scientific challenges arise, corresponding to the three successive steps of the 3D interaction loop (blue squares, top): 1) the 3D interaction technique, 2) the modeling and simulation of the 3D scenario, and 3) the design of appropriate sensory feedback.
The 3D interaction loop involves various possible inputs from the user(s) and different kinds of output (or sensory feedback) from the simulated environment. Each user can involve his/her body and mind by means of corporal and/or brain-computer interfaces. A hybrid 3D interaction technique (1) mixes mental and motor inputs and translates them into a command for the virtual environment. The real-time simulation (2) of the virtual environment takes these commands into account to change and update the state of the virtual world and virtual objects. The state changes are sent back to the user and perceived by means of different sensory feedbacks (e.g., visual, haptic and/or auditory) (3). These sensory feedbacks close the 3D interaction loop. Other users can interact with the virtual environment using the same procedure, and can potentially "collaborate" by means of "collaborative interaction techniques" (4).
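To make this loop concrete, the following minimal Python sketch spells out the three steps for a set of users; every class and method name here is a hypothetical placeholder for the subsystems described above, not an actual implementation.

    # Minimal sketch of the "hybrid" 3D interaction loop; all names are
    # hypothetical placeholders for the subsystems described above.
    def interaction_loop(users, environment, renderer):
        while environment.running:
            commands = []
            for user in users:
                motor = user.motor_interface.read()    # gestures, positions
                mental = user.brain_interface.read()   # decoded mental states
                # (1) hybrid interaction technique: fuse both inputs into a command
                commands.append(user.technique.translate(motor, mental))
            # (2) real-time simulation updates the state of the virtual world
            environment.step(commands, dt=1 / 60)
            # (3) sensory feedbacks close the loop for each user
            for user in users:
                renderer.display(environment.state, user)              # visual
                user.haptic_device.render(environment.forces_on(user)) # haptic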
This interaction loop stresses three major challenges, which correspond to three mandatory steps when designing 3D interaction with virtual environments:
3D interaction techniques: This first step consists in translating the actions or intentions of the user (inputs) into an explicit command for the virtual environment. In virtual reality, the classical tasks that require such user commands were categorized early on into four : navigating the virtual world, selecting a virtual object, manipulating it, or controlling the application (entering text, activating options, etc.). However, the addition of a third dimension, the use of stereoscopic rendering, and the use of advanced VR interfaces make many techniques that proved efficient in 2D inappropriate, and make it necessary to design specific interaction techniques and adapted tools. This challenge is renewed here by the various kinds of 3D interaction which are targeted: in our case, motor and/or cerebral inputs, and potentially multiple users.
Modeling and simulation of complex 3D scenarios: This second step corresponds to the update of the state of the virtual environment, in real time, in response to all the potential commands or actions sent by the user. The complexity of the data and phenomena involved in 3D scenarios is constantly increasing. It corresponds, for instance, to the multiple states of the entities present in the simulation (rigid, articulated, deformable, or fluid, which can constitute both the user's virtual body and the various manipulated objects), and to the multiple physical phenomena implied by natural human interactions (squeezing, breaking, melting, etc.). The challenge consists in modeling and simulating these complex 3D scenarios while meeting two strong constraints of virtual reality systems: performance (real time and interactivity) and genericity (e.g., multi-resolution, multi-modal, multi-platform).
Immersive sensory feedbacks: This third step corresponds to the display of the multiple sensory feedbacks (output) coming from the various VR interfaces. These feedbacks enable the user to perceive the changes occurring in the virtual environment. They close the 3D interaction loop, making the user immersed and potentially generating a subsequent feeling of presence. Among the various VR interfaces developed so far, we can stress two kinds of sensory feedback: visual feedback (3D stereoscopic images using projection-based systems such as CAVE systems or Head-Mounted Displays) and haptic feedback (related to the sense of touch and to tactile or force-feedback devices). The Hybrid team has a strong expertise in haptic feedback, and in the design of haptic and “pseudo-haptic” rendering . Note that a major trend in the community, strongly supported by the Hybrid team, relates to a “perception-based” approach, which aims at designing sensory feedbacks that are well in line with human perceptual capacities.
These three scientific challenges are addressed differently according to the context and the user inputs involved. We propose to consider three different contexts, which correspond to the three research axes of the Hybrid research team, namely: 1) body-based interaction (motor input only), 2) mind-based interaction (cerebral input only), and 3) hybrid and collaborative interaction (i.e., the mixing of body and brain inputs from one or multiple users).
The scientific activity of the Hybrid team follows three main axes of research:
Body-based interaction in virtual reality. Our first research axis concerns the design of immersive and effective "body-based" 3D interactions, i.e., relying on a physical engagement of the user's body. This trend is probably the most popular one in VR research at the moment. Most VR setups make use of tracking systems which measure specific positions or actions of the user in order to interact with a virtual environment. However, in recent years, novel options have emerged for measuring "full-body" movements or other, even less conventional, inputs (e.g., body equilibrium). In this first research axis we are thus concerned with the emergence of new kinds of "body-based interaction" with virtual environments. This implies the design of novel 3D user interfaces and 3D interaction techniques, novel simulation models and techniques, and novel sensory feedbacks for body-based interaction with virtual worlds. It involves real-time physical simulation of complex interactive phenomena, and the design of corresponding haptic and pseudo-haptic feedback.
Mind-based interaction in virtual reality. Our second research axis concerns the design of immersive and effective "mind-based" 3D interactions in Virtual Reality. Mind-based interaction with virtual environments makes use of Brain-Computer Interface technology. This technology corresponds to the direct use of brain signals to send "mental commands" to an automated system such as a robot, a prosthesis, or a virtual environment. BCI is a rapidly growing area of research and several impressive prototypes are already available. However, the emergence of such a novel user input also calls for novel and dedicated 3D user interfaces. This implies studying the extension of the mental vocabulary available for 3D interaction with VEs, then designing specific 3D interaction techniques "driven by the mind" and, last, designing immersive sensory feedbacks that could help improve the learning of brain control in VR.
Hybrid and collaborative 3D interaction. Our third research axis intends to study the combination of motor and mental inputs in VR, for one or multiple users. This concerns the design of mixed systems, with potentially collaborative scenarios involving multiple users, and thus multiple bodies and multiple brains sharing the same VE. This research axis therefore involves two interdependent topics: 1) collaborative virtual environments, and 2) hybrid interaction. It should result in collaborative virtual environments with multiple users, and shared systems with body and mind inputs.
The research program of the Hybrid team targets the next generations of virtual reality systems and 3D user interfaces, which could address both the “body” and “mind” of the user. Novel interaction schemes are designed, for one or multiple users. We target better integrated systems and more compelling user experiences.
The applications of our research program correspond to those of virtual reality technologies which could benefit from the addition of novel body-based or mind-based interaction capabilities:
Industry: with training systems, virtual prototyping, or scientific visualization;
Medicine: with rehabilitation and reeducation systems, or surgical training simulators;
Entertainment: with 3D web navigation, video games, or attractions in theme parks;
Construction: with virtual mock-ups design and review, or historical/architectural visits.
OpenViBE is a free and open-source software platform devoted to the design, test and use of Brain-Computer Interfaces (BCI). The platform consists of a set of software modules that can be integrated easily and efficiently to design BCI applications. The key features of the OpenViBE software are its modularity, its high performance, its portability, its multi-user facilities and its connection with high-end VR displays. The platform's "designer" tool enables users to build complete scenarios based on existing software modules, using a dedicated graphical language and a simple Graphical User Interface (GUI). This software is available on the Inria Forge under the terms of the AGPL licence, and it was officially released in June 2009. Since then, the OpenViBE software has been downloaded more than 30,000 times, and it is used by numerous laboratories, projects, and individuals worldwide. The OpenViBE software is supported and improved in the frame of the OpenViBE-NT project (section ). More information, downloads, tutorials, videos and documentation are available on the OpenViBE website.
The aim of the Collaviz software (collaborative interactive visualization) is to enable the design, deployment and sharing of collaborative virtual environments (CVE). Collaviz allows VR developers to concentrate on the behavior of the virtual objects that are shared between users in a CVE. Indeed, Collaviz provides a software architecture that hides the network programming details of the distribution and synchronization of the content of the CVE, and that facilitates the coupling with the 3D graphics API used for rendering. Collaviz is written mainly in Java and runs on multiple hardware configurations: laptop or desktop computers, immersive rooms, mobile devices. The PAC-C3D software architecture of Collaviz makes it possible to use various 3D APIs for graphic rendering: Java3D, jReality, jMonkeyEngine, OpenSG, Unity3D (work in progress) and Havok Anarchy (work in progress), and also to use various physics engines such as jBullet and SOFA. The distribution over the network can be achieved using TCP or HTTP. A collaboration with the DIVERSE team intends to extend Collaviz using a Model-Driven Engineering approach, in order to provide high-level tools that generate a large part of the Java code of virtual objects.
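As a purely conceptual illustration of what such an abstraction offers (Collaviz itself is written in Java, and the names below are hypothetical, not the actual Collaviz API), the developer of a shared virtual object writes only its behavior, while a framework base class hides distribution and synchronization:

    # Conceptual sketch only: a framework base class hides network replication,
    # so the developer writes object behavior without any networking code.
    class SharedObject:
        """Framework side: keeps the object state synchronized on every site."""
        def __init__(self, network):
            self._network = network
        def set_state(self, **state):
            self.__dict__.update(state)
            self._network.broadcast(id(self), state)  # e.g., over TCP or HTTP
        def on_remote_update(self, state):
            self.__dict__.update(state)               # apply a remote change

    class SlidingDoor(SharedObject):
        """Developer side: pure virtual-object behavior."""
        def open(self, amount):
            self.set_state(opening=min(1.0, amount))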
The paper by Merwan Achibet, Maud Marchal, Ferran Argelaguet and Anatole Lécuyer received the "Best Paper Award" at the IEEE Symposium on 3D User Interfaces 2014 (IEEE 3DUI'14).
The paper by Jean-Baptiste Barreau and Valérie Gouranton received the "Third Best Poster Award" at the International Conference on Cultural Heritage 2014.
Evaluation of Direct Manipulation using Finger Tracking for Complex Tasks in an Immersive Cube Maud Marchal, Collaboration with REVES
We have proposed a solution for interaction using finger tracking in a cubic immersive virtual reality system (or immersive cube) . Rather than using a traditional wand device, users can manipulate objects with the fingers of both hands, in a close-to-natural manner, for moderately complex, general-purpose tasks. Our solution couples finger tracking with a real-time physics engine, combined with a heuristic approach for hand manipulation which is robust to tracker noise and simulation instabilities. A first study has been performed to evaluate our interface, with tasks involving complex manipulations, such as balancing objects while walking in the cube. The users' finger-tracked manipulation was compared to manipulation with a 6 degree-of-freedom wand (or flystick), as well as to carrying out the same task in the real world. Users were also asked to perform a free task, allowing us to observe their perceived level of presence in the scene. Our results show that our approach provides a feasible interface for immersive cube environments and is perceived by users as being closer to the real experience compared to the wand. However, the wand outperforms direct manipulation in terms of speed and precision. We conclude with a discussion of the results and implications for further research.
A New Direct Manipulation Technique for Immersive 3D Virtual Environments Thi Thuong Huyen Nguyen, Thierry Duval, Collaboration with MIMETIC
We have introduced a new 7-Handle manipulation technique for 3D objects in immersive virtual environments, and its evaluation. The 7-Handle technique includes a set of seven points which are flexibly attached to an object. There are three different control modes for these points: configuration, manipulation and locking/unlocking modes. We have conducted an experiment to compare the efficiency of this technique with the traditional 6-DOF direct manipulation technique in terms of time, discomfort metrics and subjective estimation for precise manipulations in an immersive virtual environment, in two consecutive phases: an approach phase and a refinement phase. The statistical results showed that the completion time in the approach phase of the 7-Handle technique was significantly longer than the completion time of the 6-DOF technique. Nevertheless, we found a significant interaction effect between the two factors (manipulation technique and object size) on the completion time of the refinement phase. In addition, even though we did not find any significant differences between the two techniques in terms of intuitiveness, ease of use and global preference in the subjective data, we obtained significantly better satisfaction feedback from the subjects for the efficiency and fatigue criteria.
A survey of plasticity in 3D user interfaces Jérémy Lacoche, Thierry Duval, Bruno Arnaldi, Collaboration with b<>com
Plasticity of 3D user interfaces refers to their capability to automatically fit a set of hardware and environmental constraints. This area of research has already been deeply explored in the domain of traditional 2D user interfaces. Besides, during the last decade, interest in 3D user interfaces has grown. Designers find in 3D user interfaces new ways to promote and interact with data, such as e-commerce websites, scientific data visualization, etc. Because of the wide variety of Virtual Reality (VR) and Augmented Reality (AR) applications in terms of hardware, data and target users, there is a real interest in solutions for automatic adaptation, in order to improve the user experience in any context while reducing development costs. An adaptation is performed in reaction to the different criteria defining a system, such as the targeted hardware platform, the user's context, and the structure and semantics of the manipulated data. This adaptation can then impact the system in different ways, notably content presentation, interaction technique modifications and possibly the distribution of the system across a set of available devices. In , we present the state of the art on plastic 3D user interfaces. Moreover, we present well-known methods from the field of 2D user interfaces that could become relevant for 3D user interfaces.
Adaptive Navigation in Virtual Environments Ferran Argelaguet
Navigation speed for most navigation interfaces is still determined by rate-control devices (e.g. a joystick). The interface designer is in charge of adjusting the range of optimal speeds according to the scale of the environment and the desired user experience. However, this approach is not valid for complex environments (e.g. multi-scale environments): optimal speeds may vary for each section of the environment, leading to undesired side effects such as collisions or simulator sickness. Thereby, we proposed a speed adaptation algorithm based on the spatial relationship between the user and the environment and on the user's perception of motion. The computed information is used to adjust the navigation speed in order to provide an optimal navigation speed and avoid collisions. The two main benefits of our approach are, firstly, the ability to adapt the navigation speed in multi-scale environments and, secondly, the capacity to provide a smooth navigation experience by decreasing the jerkiness of the described trajectories. The evaluation showed that our approach provides performance comparable to existing navigation techniques while significantly decreasing the jerkiness of the described trajectories.
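A minimal sketch of such a speed-adaptation rule is given below, assuming the target speed is derived from the clearance to the nearest obstacle and then low-pass filtered to limit jerk; the mapping and coefficients are illustrative, not those of the published algorithm.

    # Illustrative speed adaptation: scale speed with clearance, then smooth it.
    def adapt_speed(prev_speed, clearance, dt,
                    gain=0.5, v_min=0.1, v_max=10.0, tau=0.3):
        target = min(max(gain * clearance, v_min), v_max)  # clearance-based target
        alpha = dt / (tau + dt)                            # first-order low-pass
        return prev_speed + alpha * (target - prev_speed)  # limits jerkiness

    speed = 1.0
    for clearance in [8.0, 6.0, 1.5, 0.5, 2.0]:            # meters to nearest wall
        speed = adapt_speed(speed, clearance, dt=1 / 60)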
Toward “Pseudo-Haptic Avatars”: Modifying the Visual Animation of Self-Avatar Can Simulate the Perception of Weight Lifting, Ferran Argelaguet, Anatole Lécuyer, Collaboration with MIMETIC
We have studied how the visual animation of a self-avatar can be artificially modified in real time in order to generate different haptic perceptions . In our experimental setup, participants could watch their self-avatar in a virtual environment in mirror mode while performing a weight-lifting task. Users could map their gestures onto the self-animated avatar in real time using a Kinect. We introduced three kinds of modification of the visual animation of the self-avatar according to the effort delivered by the virtual avatar: 1) changes in the spatial mapping between the user's gestures and the avatar, 2) different motion profiles of the animation, and 3) changes in the posture of the avatar (upper-body inclination). The experimental task consisted of a weight-lifting task in which participants had to order four virtual dumbbells according to their virtual weight. The user had to lift each virtual dumbbell by means of a tangible stick, and the animation of the avatar was modulated according to the virtual weight of the dumbbell. The results showed that altering the spatial mapping delivered the best performance. Nevertheless, participants globally appreciated all the different visual effects. Our results pave the way to the exploitation of such novel techniques in various VR applications such as sport training, exercise games, or industrial training scenarios, in single-user or collaborative mode.
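As an illustration of the first kind of modification (the spatial mapping), the sketch below reduces the gain between the tracked gesture and the avatar's motion as the virtual weight grows; the mapping function and constants are illustrative assumptions, not the published model.

    # Illustrative pseudo-haptic mapping: heavier virtual dumbbells make the
    # avatar follow the user's real gesture with a smaller gain.
    def avatar_displacement(user_displacement, virtual_mass, k=0.5):
        gain = 1.0 / (1.0 + k * virtual_mass)   # gain < 1 is perceived as "heavy"
        return gain * user_displacement

    print(avatar_displacement(0.30, virtual_mass=1.0))  # light: avatar moves 0.20 m
    print(avatar_displacement(0.30, virtual_mass=8.0))  # heavy: avatar moves 0.06 m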
The Virtual Mitten: A Novel Interaction Paradigm for Visuo-Haptic Manipulation of Objects Using Grip Force Merwan Achibet, Maud Marchal, Ferran Argelaguet, Anatole Lécuyer
We have proposed a novel visuo-haptic interaction paradigm called the “Virtual Mitten” for simulating the 3D manipulation of objects. Our approach introduces an elastic handheld device that provides passive haptic feedback through the fingers, and a mitten interaction metaphor that enables users to grasp and manipulate objects. The grasping performed by the mitten is directly correlated with the grip force applied on the elastic device, and a supplementary pseudo-haptic feedback modulates the visual feedback of the interaction in order to simulate different haptic perceptions. The Virtual Mitten allows natural interaction and grants users extended freedom of movement compared with rigid devices that have limited workspaces. Our approach has been evaluated in two experiments focusing on both subjective appreciation and perception. Our results show that participants were able to clearly perceive different levels of effort during basic manipulation tasks thanks to our pseudo-haptic approach. They could also rapidly learn how to achieve different actions with the Virtual Mitten, such as opening a drawer or pulling a lever. Taken together, our results suggest that our novel interaction paradigm could be used in a wide range of applications involving one-hand or two-hand haptic manipulation, such as virtual prototyping, virtual training or video games.
Collaborative Pseudo-Haptics: Two-User Stiffness Discrimination Based on Visual Feedback Ferran Argelaguet, Takuya Sato, Thierry Duval, Anatole Lécuyer, Collaboration with Tohoku University Research Institute of Electrical Communication
We have explored how the concept of pseudo-haptic feedback can be introduced in a collaborative scenario . A remote collaborative scenario in which two users interact with a deformable object is presented. Each user, through touch-based input, is able to interact with a deformable virtual object displayed on a standard display screen. The visual deformation of the virtual object is driven by a pseudo-haptic approach taking into account both the user input and the simulated physical properties. In particular, we investigated stiffness perception. In order to validate our approach, we tested our system in single-user and two-user configurations. The results showed that users were able to discriminate the stiffness of the virtual object in both conditions with comparable performance. Thus, pseudo-haptic feedback seems a promising tool for providing multiple users with physical information related to other users' interactions.
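A minimal sketch of the underlying principle, under the assumption that the displayed deformation is the user's input displacement attenuated by the simulated stiffness (an illustrative model, not the one published):

    # Illustrative pseudo-haptic stiffness: the stiffer the virtual object,
    # the less it visually deforms for the same touch input.
    def visual_deformation(input_displacement, stiffness, k_ref=1.0):
        return input_displacement * k_ref / (k_ref + stiffness)

    # Users comparing two objects judge stiffness from the visual deformation.
    print(visual_deformation(0.05, stiffness=0.5))  # soft: large deformation
    print(visual_deformation(0.05, stiffness=5.0))  # stiff: small deformation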
Sonic interaction with a virtual orchestra of factory machinery Florian Nouviale, Valérie Gouranton, collaboration with Ronan Gaugne (IMMERSIA) and LIMSI
We have conceived an immersive application in which users receive sound and visual feedback on their interactions with a virtual environment. In this application, the users play the part of conductors of an orchestra of factory machines, since each of their actions on the interaction devices triggers a pair of visual and audio responses. Audio stimuli were spatialized around the listener. The application was exhibited during the 2013 Science and Music day, and was designed to be used in a large immersive system with head tracking, shutter glasses and a 10.2 loudspeaker configuration .
Audio-Visual Attractors for Capturing Attention to the Screens When Walking in CAVE Systems Ferran Argelaguet, Valérie Gouranton, Anatole Lécuyer, collaboration with Aalborg University
In four-sided CAVE-like VR systems, the absence of the rear wall has been shown to decrease the level of immersion and can introduce breaks in presence. We have therefore investigated to what extent users' attention can be driven by visual and auditory stimuli in a four-sided CAVE-like system . An experiment was conducted in order to analyze how user attention is diverted while physically walking in a virtual environment when audio and/or visual attractors are present. The four-sided CAVE used in the experiment allowed users to walk up to 9 m in a straight line. An additional key feature of the experiment is that auditory feedback was delivered through binaural audio rendering techniques, via non-personalized head-related transfer functions (HRTFs). The audio rendering was dependent on the user's head position and orientation, enabling localized sound rendering. The experiment analyzed how different "attractors" (audio and/or visual, static or dynamic) modify the user's attention. The results of the experiment show that audio-visual attractors are the most efficient attractors for keeping the user's attention toward the inside of the CAVE. The knowledge gathered in the experiment can provide guidelines for the design of virtual attractors, in order to keep the attention of the user and avoid the "missing wall".
Immersia, an open immersive infrastructure: doing archaeology in virtual reality Valérie Gouranton, Bruno Arnaldi, collaboration with MIMETIC and Ronan Gaugne (IMMERSIA)
We have first studied the mutual enrichment between archaeology and virtual reality . To do so, we are considering Immersia, our open high-end platform dedicated to research on immersive virtual reality and its usages. Immersia is a node of the European project Visionair, which offers an infrastructure of high-level visualisation facilities open to research communities across Europe. In Immersia, two projects are currently active on the theme of archaeology: one concerns the study of the Cairn of Carn, with CReAAH, a pluridisciplinary research laboratory of archaeology and archaeosciences, and the other the reconstitution of the Gallo-Roman villa of Bais, with the French institute INRAP.
Virtual reality tools for the West Digital Conservatory of Archaeological Heritage Jean-Baptiste Barreau, Valérie Gouranton, collaboration with Ronan Gaugne (IMMERSIA) and INRAP
In continuation of the 3D data production work carried out by the WDCAH (West Digital Conservatory of Archaeological Heritage), the use of virtual reality tools allows archaeologists to carry out analysis and understanding research on their sites. We have then focused on the virtual reality services proposed to archaeologists in the WDCAH, through the example of two archaeological sites: the Temple de Mars in Corseul and the Cairn of Carn Island .
Preservative Approach to Study Encased Archaeological Artefacts Valérie Gouranton, Bruno Arnaldi, collaboration with Ronan Gaugne (IMMERSIA) and INRAP
We have proposed a workflow based on a combination of computed tomography, 3D images and 3D printing to analyse different archaeological material dating from the Iron Age: a weight axis, a helical piece, and a fibula . This workflow enables a preservative analysis of artefacts that are unreachable because they are encased in stone, corrosion or ashes. Computed tomography images together with 3D printing provide a rich toolbox for the archaeologist's work, giving access to a tangible representation of hidden artefacts. These technologies are combined in an efficient, affordable and accurate workflow compatible with the constraints of preventive archaeology.
Combination of 3D Scanning, Modeling and Analyzing Methods around the Castle of Coatfrec Reconstitution Jean-Baptiste Barreau, Valérie Gouranton, collaboration with Ronan Gaugne (IMMERSIA) and INRAP
The castle of Coatfrec is a medieval castle in Brittany of which merely a few ruins remain, currently in the process of restoration. Beyond its great archaeological interest, it has become, over the course of the last few years, the subject of experimentation in digital archaeology. 3D scanning methods were used in order to compare the remaining structures with their absent, hypothetical counterparts, resulting in the first quantitative results of their kind. We have applied these methods and presented the results obtained using these new digital tools .
Ceramics Fragments Digitization by Photogrammetry, Reconstructions and Applications Jean-Baptiste Barreau, Valérie Gouranton, collaboration with Ronan Gaugne (IMMERSIA) and INRAP
We have studied an application of photogrammetry to ceramic fragments from two excavation sites located in north-west France . The restitution by photogrammetry of these different fragments allowed the potteries to be reconstructed in their original state, or at least as close to it as possible. We used the 3D reconstructions to compute some metrics and to generate a presentation support using a 3D printer. This work is based on affordable tools and illustrates how 3D technologies can be quite easily integrated into the archaeological process with limited financial resources.
Fast collision detection for fracturing rigid bodies Loeiz Glondu, Maud Marchal
In complex scenes with many objects, collision detection plays a key role in the simulation performance. This is particularly true in fracture simulation, for two main reasons: one is that fracture fragments tend to exhibit very intensive contact, and the other is that collision detection data structures for new fragments need to be computed on the fly. In , we present novel collision detection algorithms and data structures for real-time simulation of fracturing rigid bodies. We build on a combination of well-known efficient data structures, namely distance fields and sphere trees, making our algorithm easy to integrate into existing simulation engines. We propose novel methods to construct these data structures, such that they can be efficiently updated upon fracture events and integrated in a simple yet effective self-adapting contact selection algorithm. Altogether, we drastically reduce the cost of both collision detection and collision response. We have evaluated our global solution for collision detection on challenging scenarios, achieving high frame rates suited for hard real-time applications such as video games or haptics. Our solution opens promising perspectives for complex fracture simulations involving many dynamically created rigid objects.
This work was achieved in collaboration with Miguel Otaduy and Sara Schvartzman (URJC Madrid, Spain) and Georges Dumont (MIMETIC team).
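To illustrate how the two data structures can cooperate, here is a hedged sketch of a narrow-phase query in which the spheres of one body's sphere tree are tested against the other body's signed distance field, pruning subtrees that cannot penetrate; the on-the-fly updates upon fracture events are omitted.

    # Sketch: test a sphere tree (body A) against a signed distance field (body B).
    # A sphere can only collide if the field distance at its center is below its
    # radius, so whole subtrees are pruned early.
    def collide(node, distance_field, contacts):
        d = distance_field(node.center)        # signed distance at sphere center
        if d >= node.radius:
            return                             # subtree entirely separated
        if node.is_leaf():
            contacts.append((node.center, node.radius - d))  # penetration depth
        else:
            for child in node.children:
                collide(child, distance_field, contacts)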
Collision detection: broad phase adaptation from multi-core to multi-GPU architecture Bruno Arnaldi, Valérie Gouranton
We have presented several contributions on collision detection optimization centered on hardware performance. We focus on the first step (broad phase) and propose three new ways of parallelizing the well-known Sweep and Prune algorithm. We first developed a multi-core model that takes into account the number of available cores. The multi-core architecture enables us to distribute the geometric computations using multi-threading. Critical writing sections and thread idling have been minimized by introducing new data structures for each thread. Programming with directives, like OpenMP, appears to be a good compromise for code portability. We then proposed a new GPU-based algorithm, also based on Sweep and Prune, that has been adapted to multi-GPU architectures. Our technique is based on a spatial subdivision method used to distribute the computations among the GPUs. Results show that a significant speed-up can be obtained by scaling from 1 to 4 GPUs in a large-scale environment .
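For reference, the sequential core of the Sweep and Prune broad phase being parallelized looks roughly like the sketch below: the endpoints of the boxes' intervals on one axis are sorted, then swept to collect overlapping pairs (the published contributions distribute this work over threads, and over GPUs via spatial subdivision).

    # Sequential Sweep and Prune sketch: sort interval endpoints on one axis,
    # then sweep, keeping a set of "active" (open) boxes.
    def sweep_and_prune(boxes):
        # boxes: list of (box_id, min_x, max_x)
        events = sorted((x, is_end, i) for i, lo, hi in boxes
                        for x, is_end in ((lo, 0), (hi, 1)))
        active, pairs = set(), []
        for _, is_end, i in events:
            if is_end:
                active.discard(i)
            else:
                pairs.extend((j, i) for j in active)  # i overlaps all active boxes
                active.add(i)
        return pairs

    print(sweep_and_prune([(0, 0.0, 2.0), (1, 1.5, 3.0), (2, 5.0, 6.0)]))  # [(0, 1)]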
Real-time tracking of deformable target in ultrasound images Maud Marchal
In several medical applications such as liver or kidney biopsies, an anatomical region needs to be continuously tracked during the intervention. When using the ultrasound (US) image modality, tracking soft tissues remains challenging due to the deformations caused by physiological motions or medical instruments, combined with the generally weak quality of the images. In order to overcome this limitation, different techniques based on physical models have been proposed in the literature. In , we propose an approach for tracking a deformable target within 2D US images, based on a physical model driven by a smooth displacement field obtained from dense information. This allows taking into account highly localized deformations in the US images.
This work was achieved in collaboration with Lucas Royer and Alexandre Krupa (Lagadic team), Anthony Le Bras (CHU Rennes) and Guillaume Dardenne (IRT B-Com).
Stereoscopic Rendering of Virtual Environments with Wide Field-of-Views up to 360° Jérôme Ardouin, Anatole Lécuyer, Maud Marchal
We propose a novel approach for stereoscopic rendering of virtual environments with a wide Field-of-View (FoV) of up to 360°. Handling such a wide FoV implies the use of non-planar projections and generates specific problems, such as for the rasterization and clipping of primitives. We propose a novel pre-clip stage, specifically adapted to geometric approaches, for which problems occur with polygons spanning across the projection discontinuities. Our approach integrates seamlessly with immersive virtual reality systems, as it is compatible with stereoscopy, head-tracking, and multi-surface projections. The benchmarking of our approach with different hardware setups shows that it complies well with real-time constraints and is capable of displaying a wide range of FoVs. Thus, our geometric approach could be used in various VR applications in which the user needs to extend the FoV and apprehend more visual information.
This work was achieved in collaboration with Eric Marchand (Lagadic team).
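As an illustration of the non-planar projections involved, the sketch below maps a 3D direction to normalized equirectangular image coordinates, one possible 360° mapping; the actual rendering pipeline (and the pre-clip stage that splits primitives crossing the seam) is more involved.

    import math

    # Equirectangular mapping: 3D direction -> (u, v) in [0,1]^2, covering 360°.
    # Primitives spanning the u = 0/1 seam are what a pre-clip stage must split.
    def equirectangular(x, y, z):
        lon = math.atan2(x, -z)                           # longitude in (-pi, pi]
        lat = math.asin(y / math.sqrt(x*x + y*y + z*z))   # latitude
        return 0.5 + lon / (2 * math.pi), 0.5 + lat / math.pi

    print(equirectangular(0.0, 0.0, -1.0))   # straight ahead -> (0.5, 0.5)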
A survey on bimanual haptics Anatole Lécuyer, Maud Marchal, Anthony Talvas.
When interacting with virtual objects through haptic devices, most of the time only one hand is involved. However, the increase in computational power, along with the decrease in device costs, increasingly allows the use of dual haptic devices. The field which encompasses all studies of haptic interaction with either remote or virtual environments using both hands of the same person is referred to as bimanual haptics. It differs from the common unimanual haptic field notably due to specificities of the human bimanual haptic system, e.g. the dominance of the hands, their differences in perception, and their interactions at a cognitive level. These specificities call for adapted hardware and software solutions when applying the use of two hands to computer haptics. In , we review the state of the art on bimanual haptics, encompassing the human factors in bimanual haptic interaction, the currently available bimanual haptic devices, the software solutions for two-handed haptic interaction, and the existing interaction techniques.
Haptic cinematography Fabien Danieau, Anatole Lécuyer
Haptics, the technology which brings tactile or force feedback to users, has great potential for enhancing movies and could lead to new immersive experiences. In , we introduce Haptic Cinematography, which presents haptics as a new component of the filmmaker's toolkit. We propose a taxonomy of haptic effects and introduce novel effects coupled with classical cinematographic motions to enhance the video viewing experience. More precisely, we propose two models to render haptic effects based on camera motions: the first model makes the audience feel the motion of the camera, and the second provides haptic metaphors related to the semantics of the camera effect. Results from a user study suggest that these new effects improve the quality of experience. Filmmakers may use this new way of creating haptic effects to propose new immersive audiovisual experiences.
This work was achieved in collaboration with Marc Christie (MIMETIC team), Julien Fleureau, Philippe Guillotel and Nicolas Mollet (Technicolor).
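A minimal sketch of the first model (making the audience feel the camera motion), under the assumption that the haptic force is proportional to the camera acceleration estimated by finite differences; the gain and clamping are illustrative.

    # Illustrative "feel the camera" effect: estimate the camera acceleration
    # from three successive positions and map it to a clamped haptic force.
    def camera_motion_force(p_prev, p_curr, p_next, dt, gain=2.0, f_max=5.0):
        accel = [(n - 2 * c + p) / dt**2
                 for n, c, p in zip(p_next, p_curr, p_prev)]
        return [max(-f_max, min(f_max, gain * a)) for a in accel]

    # Three successive camera positions (x, y, z) sampled at 24 fps.
    print(camera_motion_force((0, 0, 0), (0.01, 0, 0), (0.03, 0, 0), dt=1 / 24))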
Collaborative Virtual Training with Physical and Communicative Autonomous Agents Thomas Lopez, Valérie Gouranton, Florian Nouviale, Rozenn Bouville-Berthelot, Bruno Arnaldi
Virtual agents are a real asset in collaborative virtual environments for training (CVET), as they can replace missing team members. Collaboration between such agents and users, however, is generally limited. We presented a whole integrated model of CVET focusing on the abstraction of the real or virtual nature of an actor, in order to define a homogeneous collaboration model. First, we defined a new collaborative model of interaction. This model notably allows abstracting the real or virtual nature of a teammate. Moreover, we proposed a new role-exchange approach, so that actors can swap their roles during training. The model also permits the use of physically based objects and character animation to increase the realism of the world. Second, we designed a new communicative agent model, which aims at improving collaboration with other actors by using dialog to coordinate their actions and share their knowledge. Finally, we evaluated the proposed model to estimate the resulting benefits for the users, and we showed that it integrates into existing CVET applications .
Exchange of avatars: Toward a better perception and understanding Thomas Lopez, Rozenn Bouville-Berthelot, Florian Nouviale, Valérie Gouranton, Bruno Arnaldi
The exchange of avatars, i.e. the act of exchanging one's avatar with another, is a promising trend in multi-actor virtual environments. It provides new opportunities for users, such as controlling a different avatar for a specific action, retrieving knowledge belonging to a particular avatar, solving conflicts and deadlock situations, or even helping another user. Virtual Environments for Training are especially affected by this trend, as a specific role derived from a scenario is usually assigned to a unique avatar. Despite the increasing use of avatar exchange, users' perception and understanding of this mechanism have not been studied. We propose two complementary user-centered evaluations that aim at comparing several representations of the exchange of avatars, termed exchange metaphors. Our first experiment focuses on the perception of an exchange by a user who is not involved in it, and the second experiment analyzes the perception of an exchange triggered by the user. Results show that the use of visual feedback globally aids better understanding of the exchange mechanism in both cases. Our first experiment suggests, however, that visual feedback is less efficient than a simple popup notification in terms of task duration. In addition, the second experiment shows that much simpler metaphors with no visual effect are generally preferred because of their efficiency .
An interaction abstraction model for seamless avatar exchange in CVET Rozenn Bouville-Berthelot, Thomas Lopez, Florian Nouviale, Valérie Gouranton, Bruno Arnaldi
Collaboration and interaction between users and virtual humans in virtual environments is a crucial challenge, notably for Collaborative Virtual Environments for Training (CVET). A training procedure, indeed, often involves several actors: trainees, teammates and, in many cases, a trainer. Yet, a major benefit of CVET is to allow users to be trained even if the number of persons required by the procedure is not available. Therefore, almost every CVET uses autonomous virtual humans to replace the missing persons. We have proposed to improve the effective collaboration between users and virtual humans involved in a complex task within a CVET. Using an entity called the "Shell", we are able to wrap the features common to both users and virtual humans. It gives us an abstraction level to pool the management of the main processes used to control an avatar, interact with the environment and gather knowledge from a CVET. Besides, the Shell allows a seamless exchange of avatars during a procedure. Thanks to the Shell, the exchange can be carried out at any time during a task, while preserving all the data associated with a role in a procedure .
#SEVEN: a Sensor Effector Based Scenarios Model for Driving Collaborative Virtual Environment Guillaume Claude, Valérie Gouranton, Rozenn Bouville-Berthelot, Bruno Arnaldi
We introduced #SEVEN, a sensor-effector model that enables the execution of complex scenarios for driving Virtual Reality applications. #SEVEN is based on an enhanced Petri net model which is able to describe and solve intricate event sequences. Our model also proposes several useful features for the design of collaborative scenarios for Collaborative Virtual Environments, such as versatile roles and Activity Continuum. We also illustrate its usage by describing a demonstrator that presents an implementation of our model .
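The sensor-effector principle can be sketched as follows on top of a Petri net: a transition may fire when its input places hold tokens and its sensor (a predicate on the world state) is true, and firing runs an effector that acts on the world. The names below are hypothetical; the actual #SEVEN model is richer (roles, Activity Continuum, etc.).

    # Sketch of a sensor-effector Petri net transition (hypothetical names).
    class Transition:
        def __init__(self, inputs, outputs, sensor, effector):
            self.inputs, self.outputs = inputs, outputs
            self.sensor, self.effector = sensor, effector

        def try_fire(self, marking, world):
            if all(marking[p] > 0 for p in self.inputs) and self.sensor(world):
                for p in self.inputs:
                    marking[p] -= 1          # consume input tokens
                for p in self.outputs:
                    marking[p] += 1          # produce output tokens
                self.effector(world)         # act on the virtual environment
                return True
            return False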
Collaborative virtual environments for ergonomics: embedding the design engineer role in the loop Thierry Duval, collaboration with Charles Pontonnier and Georges Dumont (MIMETIC).
We have proposed to define the role and duties of a design engineer involved in a collaborative ergonomic design session supported by a 3D collaborative virtual environment. For example, such a session can be used to adapt the manual task an operator must achieve in the context of an industrial assembly line. We first presented the interest of such collaborative sessions. We then presented related work explaining the need for proper 3D CVEs and metaphors to obtain efficient collaborative ergonomic design sessions. Finally, we proposed a use case highlighting the type of metaphors such engineers need in order to be efficient in such a framework .
Improving Awareness for 3D Virtual Collaboration by Embedding the Features of Users' Physical Environments and by Augmenting Interaction Tools with Cognitive Feedback Cues Thierry Duval, Thi Thuong Huyen Nguyen, Valérie Gouranton, collaboration with MimeTic
The feeling of presence is essential for efficient interaction within Virtual Environments (VEs). When a user is fully immersed within a VE through a large immersive display system, his/her feeling of presence can be altered because of disturbing interactions with his/her physical environment, such as collisions with hardware parts of the system or loss of tracking. This alteration can be avoided by taking into account the physical features of the user as well as those of the system hardware, and embedding them in the VE. Moreover, the 3D abstract representation of these physical features can also be useful for collaboration between distant users, because they can make a user aware of the physical limitations of the others he/she is collaborating with. We used the Immersive Interactive Virtual Cabin (IIVC) model to obtain this virtual representation of the user's physical environment, and we illustrated how this representation can be used in a collaborative navigation task in a VE. We also presented how we can add 3D representations of 2D interaction tools in order to cope with asymmetrical collaborative configurations, providing 3D cues for a user to understand the actions of the others even if he/she is not fully immersed in the shared VE .
From 3D Bimanual Toward Distant Collaborative Interaction Techniques: An Awareness Issue Morgan Le Chénechal, Thierry Duval, Valérie Gouranton, Bruno Arnaldi, collaboration with b<>com
CVEs involve the use of complex interaction techniques based on specific collaborative metaphors. The design of these metaphors may be a difficult task because it has to deal with collaborative issues that come from disparate research areas (Human-Computer Interfaces, Human-Human Interaction, Networking, Physiology and Social Psychology). Metaphors for bimanual interaction have been developed for a while, essentially because it is a widespread area of interest for common tasks. Bimanual interactions involve the simultaneous use of both hands of the user in order to achieve a goal with better performance compared to unimanual interactions, thanks to a natural skill: proprioception. This collaborative aspect could certainly be a helpful entry point for the design of efficient collaborative interaction techniques extended from improved bimanual metaphors. However, the proprioceptive sense cannot be considered in the same way, and additional features must be proposed in order to collaborate efficiently. Thus, awareness is a key to making CVEs usable, and the availability of collaborative feedback is essential to extend bimanual interactions toward collaborative ones. In this work, we based our study on existing work on bimanual and collaborative interaction techniques, trying to draw similarities between them. We emphasize common points between both fields that could be useful to better design both metaphors and awareness in CVEs .
A survey of communication and awareness in collaborative virtual environments Thi Thuong Huyen Nguyen, Thierry Duval
In the domain of Collaborative Virtual Environments (CVEs), many virtual worlds, frameworks and techniques are built for a specific and direct purpose. There is no general solution that remains good and efficient enough for all collaborative systems. Depending on the purpose of the collaborative work, the interaction and manipulation techniques change from one application to another. Despite these differences between interaction techniques, they always benefit greatly from awareness features that help make explicit the implicit knowledge related to one's own and others' working activities, as well as to the virtual workspace. In addition, people in CVEs also use communication channels to negotiate shared understandings of task goals, task decomposition and task progress. Therefore, awareness and communication are usually considered as “instruments” for completing collaborative tasks in the environment. However, little research work has been devoted to improving awareness and communication channels in CVEs for a better collaboration between users. We have studied the importance of awareness and communication in collaborative virtual environments. We have investigated the different kinds of awareness which need to be carefully designed. We have discussed different communication means and how to cope with this diversity, so that we can benefit from the availability of different peripheral devices and find effective communication means for working together. Finally, we have made some propositions to overcome these current limitations of CVEs .
When model driven engineering meets virtual reality: feedback from application to the Collaviz framework Thierry Duval, collaboration with Arnaud Blouin and Jean-Marc Jézéquel (DIVERSE).
Despite the increasing use of 3D Collaborative Virtual Environments (3D CVE), their development is still a cumbersome task. The various concerns to consider (distributed systems, 3D graphics, etc.) complicate the development as well as the evolution of CVEs. Software engineering has recently proposed methods and tools to ease the development process of complex software systems. Among them, Model-Driven Engineering (MDE) considers models as first-class entities. A model is an abstraction of a specific aspect of the system under study, for a specific purpose. MDE thus breaks down a complex system into as many models as there are purposes, such as: generating code from models; building domain-specific programming/modeling languages (DSL); generating tools such as graphical or textual editors. We have leveraged MDE for developing 3D CVEs. We showed how the Collaviz framework benefited from a DSL we built. The benefits are multiple: 3D CVE designers can focus on the behavior of their virtual objects without bothering with distribution and graphics features; configuring the content of 3D CVEs and their deployment on various software and hardware platforms can be automated through code generation. We detailed the development process we propose and the experiments we conducted on Collaviz .
Mind-Mirror: combining BCI and augmented reality to "see your brain in action in your own head", Anatole Lécuyer, Jonathan Mercier, Maud Marchal
Imagine you are facing a mirror, seeing at the same time both your real body and a virtual display of your brain in activity, perfectly superimposed on your real image “inside your real skull”. We have introduced a novel augmented reality paradigm called “Mind-Mirror” which enables the experience of seeing “through your own head”, visualizing your brain “in action and in situ” . Our approach relies on the use of a semi-transparent mirror positioned in front of a computer screen. A virtual brain is displayed on screen and automatically follows the head movements using an optical face-tracking system. The brain activity is extracted and processed in real time with the help of an electroencephalography (EEG) cap worn by the user. A rear view is also proposed, thanks to an additional webcam recording the rear of the user’s head. The use of EEG classification techniques enables testing a neurofeedback scenario in which the user can train and progressively learn how to control different mental states, such as “concentrated” versus “relaxed”. The results of a user study comparing a standard visualization used in neurofeedback with our approach showed that the Mind-Mirror could be successfully used, and that the participants particularly appreciated its innovation and originality. We believe that, in addition to applications in Neurofeedback and Brain-Computer Interfaces, the Mind-Mirror could also be used as a novel visualization tool for education, training or entertainment applications.
This work was achieved in collaboration with Fabien Lotte from POTIOC team (Inria-Bordeaux).
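For illustration, one common way to derive a "relaxed versus concentrated" index from EEG is the relative alpha-band (8-12 Hz) power, sketched below on synthetic data; this is an assumption made for illustration, not the Mind-Mirror's actual classification pipeline.

    import numpy as np

    # Illustrative relaxation index: fraction of EEG power in the alpha band.
    def alpha_ratio(eeg, fs):
        spectrum = np.abs(np.fft.rfft(eeg)) ** 2
        freqs = np.fft.rfftfreq(len(eeg), 1 / fs)
        alpha = spectrum[(freqs >= 8) & (freqs <= 12)].sum()
        return alpha / spectrum[freqs > 1].sum()   # ignore slow drifts below 1 Hz

    fs = 256
    t = np.arange(2 * fs) / fs                     # two seconds of synthetic data
    eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
    print(alpha_ratio(eeg, fs))                    # high value -> "relaxed"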
Using SSVEP-based BCI with 3D stereoscopic display Anatole Lécuyer
We have investigated the feasibility of dual-frequency Steady-State Visual Evoked Potential (SSVEP) stimulation using a 3D display and stereoscopic glasses . Dual-frequency stimulation allows more targets to be created using a small number of frequencies, and stereoscopic vision offers a suitable medium for dual-frequency stimulation, as the two views can be controlled independently. Participants were exposed to a repetitive visual stimulus flashing at different frequencies in the left and right views, and the electroencephalography (EEG) trace was examined. Our results suggest that the two stimulation frequencies remain evident in the SSVEP response. In addition, the participant ratings showed no significant differences in fatigue, annoyance, comfort or strangeness of the stimulation compared to conventional forms of stimulation. These results pave the way for further studies using stereoscopic dual-frequency stimulation and its potential use in virtual reality and 3D video games.
This work was achieved in collaboration with Robert Leeb (EPFL, Switzerland).
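A minimal sketch of how the presence of both stimulation frequencies can be checked in the SSVEP response: compare the EEG power concentrated around each target frequency (illustrative only; real SSVEP detection relies on more robust estimators and harmonics).

    import numpy as np

    # Illustrative dual-frequency SSVEP check: band power around each target.
    def power_at(eeg, fs, f_target, bw=0.5):
        spectrum = np.abs(np.fft.rfft(eeg)) ** 2
        freqs = np.fft.rfftfreq(len(eeg), 1 / fs)
        return spectrum[(freqs > f_target - bw) & (freqs < f_target + bw)].sum()

    fs = 256
    t = np.arange(4 * fs) / fs                      # four seconds of synthetic EEG
    eeg = np.sin(2*np.pi*10*t) + np.sin(2*np.pi*12*t) + np.random.randn(t.size)
    for f in (10, 12, 14):                          # 10 and 12 Hz were flashed
        print(f, power_at(eeg, fs, f))              # clear peaks at 10 and 12 Hz only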
Passive BCI and music Anatole Lécuyer
Passive brain-computer interfaces (passive BCI), also named implicit BCI, provide information derived from the user's mental activity to a computerized application, without the need for the user to control his/her brain activity. We have proposed an overview of current research on passive BCIs in . We have notably studied how they can be applied to the context of music creation, where they can provide novel information to adapt the music creation process, e.g., exploiting the user's mental concentration to adapt the music tempo.
Which factors drive successful BCI skill learning? Anatole Lécuyer, Lorraine Perronnet
Brain-Computer Interfaces, although very promising, suffer from poor reliability. Rather than improving brain signal processing alone, an interesting research direction is to guide users toward mastering BCI control. Thus, we have introduced a set of motivational and cognitive factors which could influence the learning process, and which should be considered to improve the global performance of BCI users . We base our approach on Keller's integrative theory of motivation, volition, and performance, which combines motivational (affective) and cognitive factors to explain what makes human users learn and perform efficiently, irrespective of the task. These factors can guide the creation of learning environments, such as BCI training protocols.
This work was achieved in collaboration with Fabien Lotte and Christian Muhl (POTIOC team, Inria-Bordeaux), Moritz Grosse-Wentrup (MPI, Tuebingen), and Reinhold Scherer (TU Graz, Austria).
A methodological framework for applications combining BCI and VE Anatole Lécuyer
We have proposed a user-centred methodological framework to guide the design and evaluation of applications combining a Brain-Computer Interface (BCI) and a Virtual Environment (VE). Our framework is based on the contributions of ergonomics, to ensure these applications are well suited for end-users. It provides methods, criteria and metrics to perform the phases of the human-centred design process, aiming to understand the context of use, specify the user needs and evaluate the solutions in order to define design choices. Several ergonomic methods (e.g., interviews, longitudinal studies, user-based testing), objective metrics (e.g., task success, number of errors) and subjective metrics (e.g., mark assigned to an item) are suggested to define and measure usefulness, usability, acceptability, hedonic qualities, appeal, user-experience-related emotions, immersion and presence. The benefits and contributions of our user-centred framework for the ergonomic design of applications combining BCI and VE were also discussed.
This work was achieved in collaboration with Fabien Lotte from POTIOC team (Inria-Bordeaux).
Mensia Technologies is an Inria start-up company created in November 2012 as a spin-off of the Hybrid team. Mensia is focused on wellness and healthcare applications emerging from BCI and neurofeedback technologies. The Mensia startup should benefit from the team’s expertise and from valuable, proprietary BCI research results. Mensia is based in Rennes and Paris. Anatole Lécuyer and Yann Renard (a former Inria expert engineer who designed the OpenViBE software architecture and was involved in team projects for 5 years) are co-founders of Mensia Technologies, together with CEO Jean-Yves Quentel.
The ongoing contract between Hybrid and Mensia started in November 2013 and supports the transfer to Mensia Technologies, for 5 years, of several software components designed by the Hybrid team ("OpenViBE", "StateFinder") related to our BCI activity and our OpenViBE software (section ), for future multimedia or medical applications of Mensia.
This ongoing contract started in June 2013 and supports the transfer of several software components designed by the Hybrid team ("3D Cursors", "Elastic Images") in the frame of the W3D project to the MBA Multimédia company, for future applications in the field of multimedia and web design based mainly on HTML5 and the WordPress software.
This ongoing contract started in June 2013 and supports the transfer of several software components designed by the Hybrid team ("3D Cursors", "Pseudo-haptik", "Elastic Images") in the frame of the W3D project to the Polymorph Studio company, for future applications in the field of multimedia and web design based mainly on the Unity3D software.
This grant started in October 2012 and ended in 2014. It has supported Pierre Gaucher's CIFRE PhD program on "Novel 3D interaction techniques based on pseudo-haptic feedback".
This grant started in January 2011 and ended in 2014. It has supported Fabien Danieau's CIFRE PhD program on "Improving audiovisual experience with haptic feedback".
S3PM is a 3-year project (2013-2016) funded by Labex CominLabs. It involves 3 academic research teams: Medicis (LTSI/Inserm), S4 and Hybrid (IRISA/Inria). S3PM aims at providing specific models, tools and software to create a collaborative virtual environment dedicated to neurosurgery processes using observations of real processes.
HEMISFER is a 3-year project (2013-2016) funded by Labex CominLabs. It involves 4 Inria/IRISA teams (Hybrid, Visages (lead), Panama, Athena) and 2 medical centers: the Rennes Psychiatric Hospital (CHGR) and the Reeducation Department of Rennes Hospital (CHU Pontchaillou). The goal of HEMISFER is to make full use of the neurofeedback paradigm in the context of rehabilitation and psychiatric disorders. The major breakthrough will come from the use of a coupling model associating functional and metabolic information from Magnetic Resonance Imaging (fMRI) with Electro-encephalography (EEG) to "enhance" the neurofeedback protocol. Clinical applications concern motor, neurological and psychiatric disorders (stroke, attention-deficit disorder, treatment-resistant mood disorders, etc.).
SABRE is a 3-year project (2014-2017) funded by Labex CominLabs. It involves 1 Inria/IRISA team (Hybrid) and 2 groups from TELECOM BREST engineering school. The goal of SABRE is to improve the computational functionalities and power of current real-time EEG processing pipelines. The project will investigate innovative EEG solution methods empowered and sped up by ad-hoc, transistor-level implementations of their key algorithmic operations. A completely new family of fully hardware-integrated computational EEG imaging methods will be developed, which is expected to speed up the imaging process of an EEG device by several orders of magnitude in real-case scenarios.
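To give a rough idea of the per-block operations that such real-time pipelines must sustain, and which are natural candidates for the hardware acceleration targeted by SABRE, the following minimal sketch in Python illustrates a band-pass filtering step followed by a linear inverse projection onto source space. This is an illustration only, not the project's actual pipeline; the sampling rate, channel and source counts, and the random inverse operator are hypothetical stand-ins.

    import numpy as np
    from scipy.signal import butter, lfilter, lfilter_zi

    # Illustrative real-time EEG imaging step: band-pass filtering followed
    # by a linear inverse projection onto source space. The dense matrix
    # product is typical of the "key algorithmic operations" that
    # transistor-level implementations could accelerate.

    FS = 512           # sampling rate in Hz (hypothetical)
    N_CHANNELS = 64    # number of EEG electrodes (hypothetical)
    N_SOURCES = 2000   # number of cortical sources (hypothetical)

    # Order-4 Butterworth band-pass, 8-30 Hz
    b, a = butter(4, [8 / (FS / 2), 30 / (FS / 2)], btype="band")
    zi = np.tile(lfilter_zi(b, a), (N_CHANNELS, 1))  # per-channel filter state

    # Stand-in for a precomputed linear inverse operator (e.g., a
    # minimum-norm estimate derived from a head model); random here.
    G = np.random.randn(N_SOURCES, N_CHANNELS)

    def process_block(eeg_block, zi):
        """Filter one raw EEG block (channels x samples) and project it
        onto source space; returns source activity and updated state."""
        filtered, zi = lfilter(b, a, eeg_block, axis=1, zi=zi)
        sources = G @ filtered  # dominant cost: dense matrix product
        return sources, zi

    # Example: process one 16-sample block (~31 ms at 512 Hz)
    block = np.random.randn(N_CHANNELS, 16)
    sources, zi = process_block(block, zi)

In such a loop, the filtering state updates and the dense projection must complete within each block period, which is what motivates moving these operations into dedicated hardware.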
CNPAO ("Conservatoire Numérique du Patrimoine Archéologique de l'Ouest") is an on-going research project partially funded by the Université Européenne de Bretagne (UEB). It involves IRISA/Hybrid and CReAAH. The main objectives are: (i) sustainable and centralized archiving of 2D/3D data produced by the archaeological community, (ii) free access to metadata, (iii) secure access to data for the different actors involved in scientific projects, and (iv) support and advice for these actors in 3D data production and exploration through the latest digital technologies, modeling tools and virtual reality systems.
b<>com is a French Institute of Research and Technology (IRT). The main goal of this IRT is to speed up the development and marketing of tools, products and services in the field of digital technologies. Our team collaborates with b<>com within two 3-year projects: ImData (on "Immersive Interaction") and GestChir (on "Augmented Healthcare").
CORVETTE (COllaboRative Virtual Environment Technical Training and Experiment) is a 4-year ANR project (2011-2014) led by Bruno Arnaldi. It involves 3 academic partners (INSA Rennes, ENIB, CEA-List) and 4 industrial partners (AFPA, Nexter Training, Virtualys, Golaem). CORVETTE aims at designing novel approaches for industrial training (maintenance, complex procedures, security, diagnosis, etc.) exploiting virtual reality technologies. The project has three main research axes: collaborative work, virtual human, communication and evaluation. The project seeks to put in synergy: 1) the Virtual Human, for its ability to embody the user as an avatar and to act as a collaborator during training; 2) natural communication between users and virtual humans for task-oriented dialogues; and 3) methodologies from cognitive psychology for assessing how effectively users and virtual humans collaborate to perform complex cooperative tasks in VR. All these components have been integrated into a unified environment based on an industrial scenario. Several evaluations of the different technologies developed in the project have also been carried out.
MANDARIN ("MANipulation Dextre hAptique pour opéRations INdustrielles en RV") is a 4-year ANR project (2012-2015). MANDARIN partners are CEA-List (coordinator), Inria/Hybrid, UTC, Haption and Renault. It aims at designing new hardware and software solutions to achieve natural and intuitive mono- and bi-manual dexterous interactions suitable for virtual environments. The objective of Hybrid in MANDARIN is to design novel multimodal 3D interaction techniques and metaphors that cope with the limitations of haptic gloves (portability, under-actuation) and assist the user in virtual reality applications requiring dexterous manipulation. The results will be evaluated with a representative industrial application which is not feasible with currently existing technologies: the bi-manual manipulation of complex rigid objects and cable bundles.
HOMO-TEXTILUS is a 4-year ANR project (2012-2015). Partners of the project are: Inria/Hybrid, CHART, LIP6, TOMORROW LAND, RCP, and, as potential end-user, the fashion designer Hussein Chalayan. The objective of HOMO-TEXTILUS is to study what the next generation of smart and augmented clothes could be, and their influence and potential impact on the behavior and habits of their users. The project is strongly oriented towards human science, with both user studies and sociological studies. The involvement of Hybrid team in the project consists in studying the design of next-generation prototypes of clothes embedding novel kinds of sensors and actuators. Envisioned sensors relate to physiological measurements such as EEG (electroencephalography and Brain-Computer Interfaces), EMG (muscular activity), GSR (galvanic skin response) or heart rate (HR). Envisioned actuators relate to new sensory stimulations such as vibrotactile displays or novel visual (e.g., LED) displays. These prototypes will then be used in the various experiments planned in the project.
SIFORAS (Simulation for training and assistance) is a 3-year project (2011-2014) funded by the competitive cluster "Images et Réseaux". SIFORAS involves 4 academic partners (INSA Rennes, ENIB, CEA-List, ENISE) and 9 industrial partners (Nexter Training, Delta CAD, Virtualys, DAF Conseils, Nexter Systems, DCNS, Renault, SNCF, Alstom). The project consists in developing a pedagogical system for technical training in industrial procedures. It aims at proposing Instructional Systems Design to answer the new objectives of training (intelligent tutoring systems, mobility, augmented reality, high productivity). Hybrid's involvement in the project shares some means and goals with the CORVETTE project, in particular concerning its global architecture based on the STORM and LORA models, and exploiting the GVT software.
Previz is a 3-year project (2013-2016) funded by the competitive cluster "Images et Réseaux". Previz involves 4 academic partners (Hybrid/INSA Rennes, ENS Louis-Lumière, LIRIS, Gipsa-Lab) and 9 industrial partners (Technicolor, Ubisoft, SolidAnim, loumasystem, Polymorph). Previz aims at proposing new previsualization tools for movie directors. The goal of Hybrid in Previz is to introduce new interactions between real and virtual actors, so that an actor's actions, regardless of his/her real or virtual nature, impact both the real and the virtual environment. The project will deliver a new production pipeline that automatically adapts and synchronizes the visual effects (VFX), in space and time, to the real performance of an actor.
The ADT MAN-IP is a 2-year project (2013-2015) funded by Inria for software support and development. MAN-IP involves two Inria teams: Hybrid and MimeTIC. MAN-IP aims at proposing a common software pipeline for both teams in order to facilitate the production of populated virtual environments. The resulting software should include functionalities for motion capture, automatic acquisition and modification, and high-level authoring tools.
The ADT OpenViBE-NT is a 3-year project (2012-2015) funded by Inria for the support and development of the OpenViBE software. OpenViBE-NT involves four Inria teams: Hybrid, Potioc, Athena and Neurosys, all being extensive users of OpenViBE. OpenViBE-NT aims at improving the current functionalities of the OpenViBE platform and helping to support its active and ever-growing community of users.
Program: FP7-INFRA
Project acronym: VISIONAIR
Project title: VISION Advanced Infrastructure for Research
Duration: Feb 2011 - Feb 2015
Coordinator: INPG
Other partners: INPG France, University Patras Greece, Cranfield University United Kingdom, Universiteit Twente Netherlands, Universitaet Stuttgart Germany, ICBPP Poland, Univ. Méditerranée France, CNR Italy, Inria France, KTH Sweden, Technion Israel, RWTH Germany, PUT Poland, AMPT France, TUK Germany, University Salford United Kingdom, Fraunhofer Germany, I2CAT Spain, University Essex United Kingdom, MTASEAKI Hungary, ECN France, UCL United Kingdom, Polimi Italy, European Manufacturing and Innovation Research Association
Abstract: Visionair calls for the creation of a European infrastructure for high-level visualisation facilities that are open to research communities across Europe and around the world. By integrating existing facilities, Visionair aims to create a world-class research infrastructure for conducting state-of-the-art research in visualisation, thus significantly enhancing the attractiveness and visibility of the European Research Area. Hybrid team is mainly involved in Work Package 9 (Advanced methods for interaction and collaboration), led and supervised by Prof. Georges Dumont (MimeTIC Inria team).
SIMS is an Inria Associate Team involving the MimeTIC and Hybrid Inria teams in Rennes and the GAMMA Research Group of the University of North Carolina in the United States. SIMS focuses on realistic and effective simulation of highly complex systems based on human movement and interaction. The Associate Team has three main research axes: crowd simulation, motion planning for autonomous virtual humans, and real-time physical simulation for interactive environments. The latter axis is supervised by Maud Marchal. In this context, one Master student spent 8 months in the GAMMA Research Group, starting in November 2013.
Dr. Gerd Bruder, Postdoc at the Universität Hamburg, Germany, spent half a month in our group in Rennes in February 2014, working on locomotion and distance perception in virtual environments, in the frame of the EU FP7 "VISIONAIR" project.
Mr. Michael Pereira, PhD student at EPFL, Switzerland, spent half a month in our group in Rennes in October 2014, working on BCI and virtual environments, in the frame of the EU FP7 "VISIONAIR" project.
Merwan Achibet
Date: Sep 2014 - Dec 2014
Institution: University of Electro-Communications (UEC), Tokyo, Japan (Prof. Kajimoto's laboratory)
Anatole Lécuyer was General co-Chair of IEEE Symposium on 3D User Interfaces 2014.
Bruno Arnaldi was Chair and Organizer of the Journées de l'AFRV 2014, Reims, France.
Maud Marchal was Chair and Organizer of a Workshop on Bimanual Haptics at Eurohaptics 2014.
Anatole Lécuyer was Award Chair of Eurohaptics 2014.
Ferran Argelaguet was Demo Chair of ICAT-EGVE 2014.
Anatole Lécuyer was Program co-Chair of IEEE Symposium on 3D User Interfaces 2014.
Anatole Lécuyer was Member of the conference program committees of Eurohaptics 2014 and IEEE Haptics Symposium 2014.
Bruno Arnaldi was Member of the conference program committee of IEEE Virtual Reality 2014.
Maud Marchal was Member of the conference program committee of IEEE Virtual Reality 2014, IEEE Symposium on 3D User Interfaces 2014, Eurohaptics 2014, International Symposium on Biomedical Simulation 2014 and MICCAI Symposium on Deep Brain Stimulation 2014.
Ferran Argelaguet was Member of the conference program committee of IEEE Virtual Reality 2014, IEEE Symposium on 3D User Interfaces 2014, ACM Virtual Reality Software and Technology 2014 and ACM Symposium on Spatial User Interfaces 2014.
Anatole Lécuyer was Reviewer for Eurohaptics 2014 and IEEE Haptics Symposium 2014.
Valérie Gouranton was Reviewer for the Conference on Computer Animation and Social Agents 2014.
Maud Marchal was Reviewer for IEEE Virtual Reality 2014, IEEE Symposium on 3D User Interfaces 2014, Eurohaptics 2014, International Symposium on Biomedical Simulation 2014, MICCAI International Conference 2014, GRAPP 2014, and the International Conference on Intelligent Robots and Systems 2014.
Ferran Argelaguet was Reviewer for EuroHaptics 2014.
Anatole Lécuyer is Associate Editor of Frontiers in Virtual Environments.
Maud Marchal is Associate Editor of Computer Graphics Forum, Review Editor of Frontiers in Virtual Environments and Member of the Editorial Board of Revue Francophone d'Informatique Graphique.
Ferran Argelaguet is Review Editor of Frontiers in Virtual Environments.
Anatole Lécuyer was Reviewer for ACM Transactions on Applied Perception and the International Journal of Human-Computer Studies.
Valérie Gouranton was Reviewer for Revue d'Intelligence Artificielle.
Maud Marchal was Reviewer for IEEE Transactions on Visualization and Computer Graphics and IEEE Transactions on Haptics.
Anatole Lécuyer:
Master MNRV: "Haptic Interaction", 9h, M2, ENSAM, Laval, FR
Ecole Doctorale Matisse: "Haptic Interaction" and "Brain-Computer Interfaces", 5h, M2-PhD, University of Rennes 1, FR
Master SIBM: "Haptic and Brain-Computer Interfaces", 4.5h, M2, University of Rennes 1, FR
Bruno Arnaldi:
Master INSA Rennes: "VAR: Virtual and Augmented Reality", 12h, M2, INSA Rennes, FR
Master INSA Rennes: "Virtual Reality", 6h, M2, INSA Rennes, FR
Master INSA Rennes: Projects on "Virtual Reality", 20h, M1, INSA Rennes, FR
Valérie Gouranton:
Licence: "Introduction to Virtual Reality", 12h, L2 and responsible of this lecture, INSA Rennes, FR
Licence: Project on "Virtual Reality", 12h, L3 and responsible of this lecture, INSA Rennes, FR
Master INSA Rennes: "Virtual Reality", 16h, M2, INSA Rennes, FR
Master INSA Rennes: Projects on "Virtual Reality", 30h, M1, INSA Rennes, FR
Maud Marchal:
Master INSA Rennes: "Modeling and Engineering for Biology and Health Applications", 48h, M2 and responsible of this lecture, INSA Rennes, FR
Master SIBM: "Biomedical simulation", 3h, M2, University of Rennes 1, FR
Ferran Argelaguet:
Master STS Informatique MITIC: "Techniques d'Interaction Avancées", 26h, M2, ISTIC, University of Rennes 1, FR
HdR: Maud Marchal, "3D Multimodal Interaction with Physically-Based Virtual Environments", Université Rennes 1, November 20th 2014
PhD: Fabien Danieau, "Contribution to the study of haptic feedback for improving the audiovisual experience", Université Rennes 1, February 13th 2014, Supervised by M. Christie, P. Guillotel, J. Fleureau, N. Mollet, and A. Lécuyer
PhD: Anthony Talvas, "Bimanual haptic interaction", INSA Rennes, December 1st 2014, Supervised by A. Lécuyer and M. Marchal
PhD: Thi Thuong Huyen Nguyen, "New 3D techniques for collaborative interaction and navigation that preserve users' immersion", Université Rennes 1, November 20th 2014, Supervised by T. Duval
PhD: Jérôme Ardouin, "Contribution to the study of visualization of real and virtual environments with an extended field-of-view", INSA Rennes, December 17th 2014, Supervised by M. Marchal, E. Marchand and A. Lécuyer
PhD in progress: Jean-Baptiste Barreau, "Virtual Reality and Archaeology", Started in February 2014, Supervised by V. Gouranton and B. Arnaldi
PhD in progress: Benoit Le Gouis, "Multi-scale physical simulation", Started in October 2014, Supervised by B. Arnaldi, M. Marchal and A. Lécuyer
PhD in progress: Lorraine Perronet, "Neurofeedback applications based on EEG, fMRI and VR", Started in January 2014, Supervised by C. Barillot and A. Lécuyer
PhD in progress: Jérémy Lacoche, "Plasticity for user interfaces in mixed reality", Started in September 2013 at b<>com Research Institute, Supervised by T. Duval, B. Arnaldi, É. Maisel and J. Royan
PhD in progress: Morgan Le Chénéchal, "Activity and perception for distant collaboration in virtual environments", Started in September 2013 at b<>com Research Institute, Supervised by B. Arnaldi, T. Duval, V. Gouranton and J. Royan
PhD in progress: Lucas Royer, "Visualization tools for needle insertion in interventional radiology", Started in September 2013 at b<>com Research Institute, Supervised by A. Krupa, M. Marchal and G. Dardenne
PhD in progress: Andéol Evain, "BCI-based Interaction", Started in September 2013, Supervised by N. Roussel, G. Casiez, F. Argelaguet and A. Lécuyer
PhD in progress: Guillaume Claude, "Synthesis and Simulation of Process Models", Started in September 2013, Supervised by V. Gouranton and B. Arnaldi
PhD in progress: François Lehericey, "Collision Detection HPC", Started in October 2013, Supervised by V. Gouranton and B. Arnaldi
PhD in progress: Merwan Achibet, "Dexterous manipulation in virtual environments", Started in November 2012, Supervised by A. Lécuyer and M. Marchal.
PhD in progress: Jonathan Mercier-Ganady, "Hybrid Brain-Computer Interfaces", Started in October 2012, Supervised by M. Marchal and A. Lécuyer
Anatole Lécuyer was Member of Selection committee of Full-time Professor Position at IUT Orsay.
Bruno Arnaldi was Member of Selection committee of Full-time Professor Position at University of Rennes 1 and Member of Selection committee of Assistant Professor at Université Lille 1.
Valérie Gouranton was Member of Selection committees of Assistant Professor at INSA Rennes and of Research Fellow (CR2) at Inria Rennes Bretagne Atlantique.
Maud Marchal was Member of Selection committee of Assistant Professor at Université Lille 1.
Anatole Lécuyer was Member of PhD jury of Huyen Nguyen (INSA de Rennes).
Bruno Arnaldi was Member of PhD juries of Jérôme Ardouin (INSA de Rennes), Anthony Talvas (INSA de Rennes) and Huyen Nguyen (INSA de Rennes), and Member of the HDR jury of Richard Kulpa (University of Rennes 2).
Maud Marchal was Member of PhD juries of Guillaume Kazmitcheff (Université Lille 1) and Emmanuelle Chapoulie (Université Nice - Sophia Antipolis).
Massive media coverage of the "Mind-Mirror" system, following a press conference and press release organized in May 2014, which resulted in numerous appearances in the media (TV, press, radio).
Other regular and numerous appearances of the team's results and activity on French TV (e.g., France 5, France 3, France 2) and in newspapers and magazines (e.g., articles in "Archeologia" (Feb 2014) and "Science et Avenir" (Aug 2014) on "Virtual Reality and Archaeology").
"Journées Science et Musique" 2014 (Rennes, Oct 2014) : organization of this event, and presentation of several demos of the team in Immersia room, Rennes.
"Journée Nationales d'Archéologie" 2014 (Rennes, June 2014) : organization of this event, and presentation of several demos, Rennes.
Participation at "Forum Libération" (Rennes, April 2014) : presentation of Anatole Lécuyer.
Participation at "LE WEB 2014" Conference (Paris, Dec 2014) : demos of the FLyVIZ system.