The overall objective of Potioc is to design, develop, and evaluate new approaches that provide rich interaction experiences between users and the digital world. In doing so, we aim at stimulating motivation, curiosity, engagement, and pleasure of use. In other words, we are interested in popular interaction, mainly targeted at the general public.
We believe that such popular interaction may enhance learning, mediation, creation, and entertainment, which are the main application areas targeted by our project-team. To this end, we explore input and output modalities that go beyond standard interaction approaches based on mice/keyboards and (touch)screens. Similarly, we are interested in 3D content, which offers new opportunities compared to traditional 2D contexts. More precisely, Potioc explores interaction approaches that rely notably on interactive 3D graphics, augmented and virtual reality (AR/VR), tangible interaction, brain-computer interfaces (BCI), and physiological interfaces.
Such approaches hold great promise in a number of fields. For example, interactive 3D graphics have become ubiquitous in industry, where they have revolutionized practices, notably by improving work cycles for conception and simulation tasks. However, except for video games, we believe that such approaches are still far from being exploited to their full extent outside such industrial contexts, despite having a huge potential for the general public in the areas targeted by our project.
In order to design interactive systems that can be beneficial to many people, and not only to expert users, we propose to change the usual design approaches, which are generally driven by criteria such as speed, efficiency, or precision. Instead, we give more credit to the user experience, in particular to criteria such as interface appeal and the enjoyment arising from interface use. Indeed, these criteria have often been neglected in academic research, whereas we believe they are crucial for users who are novices with 3D interaction, multisensory spaces, or brain-computer interfaces. An interface with a strong appeal and enjoyment factor will motivate users to use and benefit from the system.
In Potioc, we follow a multidisciplinary approach in order to tackle the problem as a whole, from the most fundamental work on human sensori-motor and cognitive abilities and preferences, through the technical aspects of interaction (both hardware and software), to the aspects linked to usage and applications.
The project of team Potioc is organized along three axes:
Understanding humans interacting with the digital world
Creating interactive systems
Exploring new applications and usages
These axes are depicted in Figure .
Objective 1 is centered on human sensori-motor and cognitive abilities, as well as on user strategies and preferences, when completing interaction tasks. Our contribution for this objective is a better understanding of humans interacting with interactive systems. The impact of this objective is mainly at a fundamental level.
In objective 2, our goal is to create interactive systems. This may include hardware parts, where new input and output modalities are explored. It also includes software parts, which are strongly linked to the underlying hardware components. Our contribution in objective 2 is to develop (hardware/software) interaction techniques allowing humans to perform interaction tasks.
Finally, in objective 3, we consider interaction at a higher level, taking into account factors that are linked to specific application domains and usages. Our contribution in this area is the exploration and the emergence of new applications and usages that benefit from the developments of the project. With this objective, we mainly target a societal impact.
Of course, strong links exist between the three objectives of the project. For example, the results obtained in objective 1 guide the development of objective 2. Conversely, new systems developed in objective 2 may feed the research questions of objective 1. Similar links exist with objective 3.
Our first objective is centered on the human side. Our goal is not to enhance general knowledge about human beings, as a research team in psychology would do. Instead, we focus on human skills and behaviors during interaction processes. To this end, we conduct experiments that allow us to better understand what users like, and where and why they have difficulties. Thanks to these investigations, we are able to design interaction techniques and systems (described in Objective 2) that are well suited to the targeted users. We believe that this fundamental piece of work is the first step required for the design of usable popular interactions. We are particularly interested in 3D interaction tasks, for which we design dedicated experiments. We also propose a new approach based on physiological and brain (ElectroEncephaloGraphy - EEG) signals for the evaluation of these interactions.
In the scope of the national project InSTInCT (ANR), we have studied how users tend to interact with 3D content on a touchscreen. Indeed, whereas this kind of interaction has been extensively studied in 2D contexts, it has been little explored in 3D. However, we believe that a good understanding of users' strategies and preferences is fundamental in order to promote 3D interaction on touchscreens. We conducted a set of experiments to investigate this kind of interaction and proposed guidelines to help designers create more user-friendly tools. This line of study led to the design of tBox. We also conducted experiments to better understand how users manage to control finger pressure, and how they tend to use this input modality. In another work, we studied the impact of directness when manipulating 3D content on multitouch screens. This allowed us to gain knowledge about users' performance in touch-based interaction.
We recently started to explore a new approach to HCI evaluations: using various physiological signals, and notably EEG signals, as a new complementary tool to assess objectively and more precisely the ergonomic quality of a given 3DUI. In particular, we aim at using physiological signals to identify where and when the pros and cons of an interface occur, based on the user's mental state during interaction. For instance, estimating the user's mental workload during interaction can give insights into where and when the interface is cognitively difficult to use. Such tools could prove very useful for improving evaluations by complementing existing tools (e.g., questionnaires or interviews), which can suffer from reporting bias, can disturb the user, or only provide an a-posteriori global (but undetailed) evaluation of the interaction. So far, we have studied the different kinds of mental states that can be estimated from EEG signals and that are valuable for HCI and user evaluations. We also obtained promising first results suggesting that the level of comfort during stereoscopic visualization could be estimated from EEG signals, hence opening the way to faster, more objective, and more individualized stereoscopic display design and calibration. Still with the objective of estimating various users' mental states to refine system evaluations and our understanding of users, we explored the estimation of mental stress (a.k.a. mental workload) and social stress (pressure due to a social evaluation) from brain and physiological signals. To this end, we first had to design a protocol to induce mental and social stress, which we did successfully. Then, we were able to calibrate stress recognition from EEG and physiological signals, as well as to assess the accuracy of the stress estimators.
We then managed to robustly estimate mental stress levels from EEG and physiological signals (EEG being the most robust modality), even across different contexts, here across different levels of social stress. This is an interesting step towards the robust estimation of mental stress in realistic conditions. Finally, we also studied and reviewed emotion recognition from EEG signals; emotion, again, is another interesting mental state to consider during an HCI evaluation.
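To make this concrete, workload and stress estimators of this kind typically start from band power features, i.e., the power of the EEG signal in specific frequency ranges. The sketch below uses purely synthetic signals (not our experimental data) to show how such a feature can be computed:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Average spectral power of `signal` in the [f_lo, f_hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].mean()

# Synthetic examples: a strong 10 Hz ("alpha"-like) oscillation
# versus broadband noise, sampled at 256 Hz for 4 seconds.
fs = 256
t = np.arange(fs * 4) / fs
rng = np.random.default_rng(0)
alpha = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
noise = rng.standard_normal(t.size)

print(band_power(alpha, fs, 8, 12) > band_power(noise, fs, 8, 12))  # True
```

In a full evaluation pipeline, such band powers computed over several electrodes and frequency bands would feed a classifier calibrated on labelled high/low-workload recordings.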
Finally, we also studied how humans interact with a specific kind of HCI: Brain-Computer Interfaces (BCI). Indeed, although EEG-based BCIs are very promising for numerous applications, e.g., rehabilitation or gaming, they mostly remain prototypes that are not used outside laboratories, due to their low reliability. Poor BCI performance is partly due to imperfect EEG signal processing algorithms, but also to the user, who may not be able to produce reliable EEG patterns. Indeed, BCI use is a skill, requiring the user to be properly trained to achieve BCI control. If he/she cannot perform the desired mental commands, no signal processing algorithm can identify them. Therefore, rather than improving EEG signal processing alone (which is what most current BCI research is about), we proposed to also guide users in learning BCI control mastery. We studied theoretical models and guidelines from psychology and cognitive sciences about human learning, which revealed the many theoretical limitations of current standard BCI training approaches. We also conducted experiments to further illustrate some limitations of current BCI training protocols and to understand and analyse them. Finally, we explored new feedback types and new EEG visualization techniques in order to help users learn BCI control skills more efficiently. These new feedback and visualization techniques notably aim at providing BCI users with more information about their EEG patterns, in order to identify relevant BCI control strategies more easily, as well as at motivating and engaging them in the learning task. This was achieved using augmented reality displays of the activity over the whole cortex - an approach entitled the "Mind-Mirror" - or by using multiplayer video-game-based BCI training. Overall, this line of research seems largely unexplored but promising, and we are currently investing more and more research effort into it.
Our objective here is to create interactive systems and design interaction techniques dedicated to the completion of interaction tasks. We divide our work into three main categories:
Interaction techniques based on existing Input/Output (IO) devices.
New IO and related techniques.
BCI and physiological computing.
When using desktop IO (i.e., mouse/keyboard/monitor), a big challenge is to design interaction techniques that allow users to complete 3D interaction tasks. Indeed, the desktop IO space, which is mainly dedicated to the completion of 2D interaction tasks, is not well suited to 3D content; consequently, 3D user interfaces need to be designed with great care. We have proposed a state of the art that describes the major approaches and techniques in this area. In the past few years, we have been particularly interested in the problem of interaction when the 3D content is displayed on a touchscreen. Indeed, standard (2D) HCI has evolved from mouse to touch input, and numerous research projects have been conducted in this direction. In contrast, very little work has been proposed in 3D. We have contributed to moving desktop 3D UIs from the mouse to the touch paradigm; what we used to do with mice in front of a screen no longer works well on touch devices. To face this problem, we have focused on touch-based 3D UIs. A first work brought tBox, a new 3D transformation widget designed for manipulating 3D objects on touchscreens. In a second work, we explored several strategies for navigating in 3D digital cities from touch inputs, in collaboration with our industrial partners Vectuel and Mappy/PagesJaunes.
In Potioc, we are interested in exploring new IO modalities that may make interaction easier, more engaging, and more motivating. In the past few years, we have designed new interactive systems that exploit unconventional IO modalities. Stereoscopic visualization has great potential for the understanding of 3D content. On the other hand, interaction with such stereoscopic environments is generally difficult. To face this problem, we conceived Toucheo, a new system that exploits stereoscopic visualization and touch input. We have also contributed to the design of a system that exploits 3D spatial and touch input in a stereoscopic 3D environment. In the scope of immersive VR, we have also proposed extensions of the current IO space. In particular, we presented a new input device specifically designed for playing music in an immersive VR environment, mixing graphical and percussion-based interaction. Another example is the SIMCA project, where we have built a gateway simulator composed of numerous screens, video projectors, and tracking systems. Tangible interaction has also been a subject of interest for us. Indeed, we believe that directly manipulating physical objects to interact with the digital world has great potential, in particular when the general public is targeted. In this direction, we have notably proposed PapARt, a system that mixes physical drawing and augmented reality. With this system, the computer disappears, and the user interacts with the digital content as he or she would with physical content. Another example is Rouages, where musicians play with physical MIDI instruments that are augmented with virtual information to provide rich experiences to the audience. Our most recent contribution is Teegi, a new system based on a unique combination of spatial augmented reality, tangible interaction, and real-time neurotechnologies.
With Teegi, a user can visualize and analyze his or her own brain activity in real-time, on a tangible character that can be easily manipulated, and with which it is possible to interact.
As part of our research on the design of interactive systems based on physiological signals, and in particular brain signals (for BCI design), we conducted a number of research projects on EEG signal processing and classification. Indeed, in order to design practical BCIs that can be used outside the lab, there is a need for robust EEG signal processing algorithms, with the long-term objective of correctly recognizing the users' mental commands (and thus EEG patterns) anytime and anywhere. To do so, we first explored and designed new features to represent EEG signals. We notably explored multifractal cumulants and predictive complexity features, waveform length features with an optimal spatial filter that we designed, as well as phase-locking value features (i.e., functional connectivity between brain areas), also with an optimal spatial filter that we designed. All these features proved useful for classifying EEG signals and, more importantly, increased BCI classification performance (by 2 to 4% on average) when combined with the gold-standard features, namely band power features. To make BCIs more robust to noise and non-stationarities, we proposed to integrate a priori knowledge into machine learning algorithms. Such knowledge represents any information we have about what a good filter or classifier should be, for instance. We successfully demonstrated this approach by learning robust and stable spatial filters. Finally, we worked on reducing the long and tedious BCI calibration times, by making it possible to design a BCI from very few training EEG signals. To do so, we proposed to generate artificial EEG signals from the few EEG trials initially available, in order to augment the training set size in a relevant way. This enabled us to calibrate BCI systems with 2 to 3 times less data than standard designs, while maintaining similar classification performance, hence effectively dividing the calibration time by 2 or 3.
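As an illustration of the data augmentation idea, one simple way to generate artificial trials (a sketch of the general principle, not necessarily the exact algorithm we used) is to split the available real trials into time segments and concatenate segments drawn from different trials of the same class:

```python
import numpy as np

def augment_trials(trials, n_new, n_segments=4, seed=0):
    """Build artificial trials by concatenating time segments
    drawn from different real trials of the same class."""
    rng = np.random.default_rng(seed)
    n_trials, n_channels, n_samples = trials.shape
    seg_len = n_samples // n_segments
    artificial = np.empty((n_new, n_channels, seg_len * n_segments))
    for i in range(n_new):
        for s in range(n_segments):
            donor = rng.integers(n_trials)       # pick a donor trial
            start = s * seg_len
            artificial[i, :, start:start + seg_len] = \
                trials[donor, :, start:start + seg_len]
    return artificial

# 5 real trials, 8 channels, 256 samples -> 10 extra artificial trials.
real = np.random.default_rng(1).standard_normal((5, 8, 256))
extra = augment_trials(real, n_new=10)
print(extra.shape)  # (10, 8, 256)
```

The artificial trials are then simply appended to the real ones before training the classifier, which is how the training set can be enlarged without lengthening the calibration session.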
Objective 3 is centered on applications and usages. Beyond the human sensori-motor and cognitive skills (Objective 1) and the hardware and software components (Objective 2), Objective 3 takes into account broader criteria for the emergence of new usages and applications in various areas, in particular in the scope of learning, popularization of science, art, and entertainment. Our goal here is not to develop full-packaged end-user applications. Instead, our contribution is to stimulate the evolution of current applications with new engaging interactive systems.
In the scope of popularization of science, we have built a strong partnership with Cap Sciences, a center dedicated to the popularization of science in Bordeaux that is visited by thousands of visitors every month. This partnership was initiated with the ANR national project InSTInCT, whose goal was to study the benefits of 3D touch-based interaction in public exhibitions. This project has led to the creation of a Living Lab where several systems developed by Potioc are tested by visitors. This provides us with valuable feedback that goes beyond what we can obtain in our controlled lab experiments. In the scope of archeology, we also contributed to a new system dedicated to public exhibitions, and we gathered the current work around the world in this area in a dedicated special issue of a journal. We also contributed to an experiment at the "Palais de la découverte" in Paris, where hundreds of visitors have experimented with PapARt (Figure ).
In the scope of education, we are currently collaborating with Stéphanie Fleck from Université de Lorraine to explore new interactive systems that enhance learning processes. Furthermore, we have launched a project with colleagues on the teaching of optical phenomena. Our project HOBIT aims at developing a Hybrid Optical Bench for Innovative Teaching.
In the scope of art, we are convinced that the work conducted in Potioc may benefit creation from the artist's point of view, and may open new interactive experiences from the audience's point of view. We have conducted work with colleagues who are specialists in digital music, and with musicians. This has led to several scientific publications and live artistic performances. We have also worked with an architect to explore neurodesign, i.e., the use of neural signals for design, here for the design of artistic shapes. Furthermore, we continued exploring the artistic domain in the scope of interactive juggling.
In the scope of entertainment, we notably explored BCI-based gaming and non-medical applications of BCI. In particular, we studied and analyzed how BCIs could be used as a control channel for virtual reality and gaming applications, as well as the pros and cons of BCI-based gaming. We also proposed and studied a multiplayer BCI-based game. Our work so far suggests that BCI-based gaming and virtual reality applications are feasible and promising, but that many research challenges still need to be overcome before widespread use. In another example in the field of entertainment, we studied several input modalities for playing a game in mobile AR.
Our project aims at providing 3D digital worlds to all, including the general public, to stimulate understanding, learning, communication, and creation. Our scope of applications encompasses:
popularization of science
education
art
entertainment
See "Objective 3: Exploring new applications and usages" for the detailed description.
As part of his thesis work, Jérémy Laviole has developed a software suite for PapARt: the Paper Augmented Reality Toolkit. This work is being extended to become a state-of-the-art library for projection mapping (spatial augmented reality) and tangible interfaces.
PapARt is a library for Processing that provides:
Augmented reality rendering for both cameras and projectors.
Tracking for augmented reality: marker tracking from ARToolkitPlus.
Camera support: in addition to the Processing Video library, PapARt supports video from OpenCV, OpenKinect, FFMPEG, FlyCapture, and more through JavaCV.
"Tactile" input on planar surfaces: touch and hovering can be detected by a depth camera such as the Kinect.
Software infrastructure to create "paper touch screens", following Processing's methods to create drawings and interactive experiences.
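As an illustration of the depth-based touch detection listed above, a minimal sketch (in Python rather than Processing, with made-up thresholds) could label each depth pixel as touching or hovering according to its height above the known planar surface:

```python
import numpy as np

def classify_depth(depth_map, plane_depth, touch_mm=10.0, hover_mm=60.0):
    """Label each depth pixel as 'touch' or 'hover' from its height
    (in mm) above a known planar surface."""
    height = plane_depth - depth_map              # distance above the plane
    touch = (height > 0) & (height <= touch_mm)
    hover = (height > touch_mm) & (height <= hover_mm)
    return touch, hover

# Synthetic 4x4 depth map (mm from the sensor); the plane lies at 1000 mm.
depth = np.full((4, 4), 1000.0)
depth[0, 0] = 995.0   # fingertip 5 mm above the plane -> touch
depth[1, 1] = 960.0   # hand 40 mm above the plane -> hover
touch, hover = classify_depth(depth, plane_depth=1000.0)
print(int(touch.sum()), int(hover.sum()))  # 1 1
```

A real implementation would additionally cluster the touching pixels into fingertips and track them over time; the thresholds here are illustrative only.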
Technical challenges for the next few years:
Making color camera, depth camera, and projector calibration easier and more automated.
Documenting the software and hardware installation of such cameras with tutorials and technical advice.
Research questions and challenges:
Creation of tangible interfaces: tangible elements can be tracked by color and depth cameras.
Capture of regions of paper sheets for image analysis, e.g., to analyse and monitor drawings.
Interactive projection mapping is an active research field, and such tools could power new research projects.
website: http://
As part of our research work on BCI, we contribute to the development of the OpenViBE software platform.
Acceptance of the ANR project "ISAR" (Interacting with Spatial Augmented Reality), led by Martin Hachet (Potioc).
Publication of "Teegi" (Tangible EEG Interface) at UIST 2014, with more than 13,000 views on Vimeo by December 2014 (http://
Typical brain activity visualization tools are hard for novice users to understand and interpret. With advances in neurotechnologies (notably BCI) and in HCI/AR, we explored new ways of visualizing one's own brain activity in real-time, for which we proposed two new systems.
We designed Teegi, a Tangible EEG Interface that enables novice users to learn more about something as complex as brain signals in an easy, engaging, and informative way. To this end, we designed a new system based on a unique combination of spatial augmented reality, tangible interaction, and real-time neurotechnologies (see Figure ). With Teegi, a user can visualize and analyze his or her own brain activity in real-time, on a tangible character that can be easily manipulated, and with which it is possible to interact. Users can also reveal specific EEG phenomena (e.g., sensorimotor rhythms), again using a tangible approach, by placing dedicated "mini-Teegis" (small puppets) in a designated area of the interaction zone. The whole system has been designed with educational psychology tools in mind to ensure efficient learning. An exploratory study showed that interacting with Teegi seems to be easy, motivating, reliable, and informative. Overall, this suggests that Teegi is a promising and relevant training and mediation tool for the general public.
In addition, together with colleagues from Inria Rennes (team Hybrid), we introduced a novel augmented reality paradigm called the "Mind-Mirror", which enables the experience of seeing "through your own head", visualizing your brain "in action and in-situ". Our approach relies on a semi-transparent mirror positioned in front of a computer screen. A virtual brain is displayed on screen and automatically follows the head movements thanks to an optical face-tracking system. The brain activity is extracted and processed in real-time thanks to an EEG cap worn by the user. A rear view is also proposed thanks to an additional webcam recording the rear of the user's head (see Figure ).
Nowadays, handheld devices are capable of displaying augmented environments in which virtual content overlaps reality. Interacting with these environments requires a manipulation technique, whose objective is to define how the input data modify the properties of the virtual objects. Current devices have multi-touch screens that can serve as input. Additionally, the position and rotation of the device can also be used as input, creating both an opportunity and a design challenge. In this project, we compared three manipulation techniques, which employ multi-touch, device movement, and a combination of both, respectively. A user evaluation on a docking task revealed that combining multi-touch and device movement yields the best task completion time and efficiency. Nevertheless, using only the device movement and orientation is more intuitive and performs worse only for large rotations. This work has been presented at the ACM Symposium on Spatial User Interaction 2014.
In this project, we furthermore evaluated controls based on Augmented Reality (AR), Tilt, and Touch for a point-and-shoot mobile game (see Figure ). A user study (n=12) was conducted to compare the three controls in terms of player experience and accuracy. Tilt and AR controls provided more enjoyment, immersion, and accuracy to the players than Touch. Nonetheless, Touch caused fewer nuisances and was playable in more varied situations. Despite the current technical limitations, we suggest incorporating AR controls into the mobile games that support them. Nowadays, AR controls can be implemented on handheld devices as easily as the more established Tilt and Touch controls. However, this study is the first comparison of the three, and its findings could thus be of interest for game developers. This work has been presented at ISMAR - MASH'D.
Spatial Augmented Reality (SAR) opens interesting perspectives for new generations of mixed reality applications. Compared to traditional HCI contexts, there is little work studying user performance in SAR. We conducted an experiment that compared pointing in SAR with pointing in front of a screen using standard pointing devices (mouse and tablet). The results showed that users tend to interact in SAR in a way that is similar to the screen condition, without a large loss of performance.
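Although the report does not detail the analysis here, pointing studies of this kind are commonly compared through Fitts' law throughput; a minimal sketch, with entirely hypothetical task parameters:

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time):
    """Pointing throughput in bits/s for one condition."""
    return index_of_difficulty(distance, width) / movement_time

# Hypothetical condition: 240 mm target distance, 15 mm target width,
# 0.9 s mean movement time.
print(round(throughput(240, 15, 0.9), 2))  # 4.54
```

Comparing such throughput values across the SAR, mouse, and tablet conditions is one standard way to quantify "without a large loss of performance".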
In a near-future scenario, we will replace some of our everyday objects with counterparts in the form of Computational Objects (COs). COs look similar to the original object; however, they contain input sensors, output devices such as displays, and a CPU. Furthermore, COs still convey the context and meaning that the original object had. For instance, a clock is associated with time, and users could thus expect its CO version to display time-related data. We suggest that any user should be able to easily code new appearances and behaviors for his or her own objects. Using creative coding as a base, we propose to add the notions of affordances and conventions to this programming context. Moreover, we suggest that COs could be used as a creativity support tool, although modifying their behavior beyond conventions could confuse the user. Finally, we reckon that, with the proper tools, users could also make physical modifications to COs. For example, a retractable cord could be attached to the clock and used to pull data out and display them in a linear layout.
Physiological sensors are not limited to research and medical facilities anymore. They are getting more and more affordable and they become widely accessible to users, as denoted by the popularity of smartphone apps and wearables that track heart rate during fitness activities. Before long, we may see a wide range of sensors embedded into consumer electronic devices. This trend has already started with the arrival of "smartwatches" that could – among other things – detect users' heart beats covertly.
We anticipated this opportunity in order to increase engagement in human-computer interaction, and more specifically in human-agent interaction. We demonstrated that we could increase the social presence of embodied agents - that is, of virtual beings - by simply mirroring the heart rate of users. The "similarity-attraction" effect induces positive emotions toward persons or things that look like us or react as we do: an agent whose associated heart beat follows the same pace as the user's is found more sympathetic. The "similarity-attraction" effect, applied to physiological computing, could thus help improve, with little effort, the acceptance of embodied agents and robots by the general public (see Figure for the setup).
Furthermore, we have taken advantage of physiological sensors in order to evaluate different sorts of human-computer interfaces prior to their release. First, we showed that we could reliably estimate the user's mental workload levels from his/her EEG signals, across different contexts involving different levels of social stress. Then, based on those results, we used a combination of electrocardiography (a measure of heart beats), galvanic skin response (a measure of sweat on the skin), and electroencephalography (EEG, a measure of brain activity) to assess the workload of users during 3D manipulation tasks. The first preliminary results seem to indicate that we might be able to discriminate the parts of the interaction that provoke a high cognitive load, and hence need to be improved. This work is in line with the evaluation of visual comfort. Earlier this year, we presented a pilot study documenting how different virtual depths can cause different levels of discomfort, and how this discomfort translates into EEG activity.
Pervasive technologies and physiological computing may be a key component for bridging the gap that too often divides machines and the general public. We believe that they will help make computers more enjoyable and more usable.
While recent research on Brain-Computer Interfaces (BCI) has highlighted their potential for many applications, they remain barely used outside laboratories due to a lack of robustness. Spontaneous BCIs (i.e., mental imagery-based BCIs) often rely on mutual learning efforts by the user and the machine: BCI users learn to produce stable EEG patterns (spontaneous BCI control being widely acknowledged as a skill), while the computer learns to automatically recognize these EEG patterns using signal processing. Most research so far has focused on signal processing, mostly neglecting the human in the loop.
Indeed, even if we advocated in one of our previous publications (see the 2013 activity report) that current human training approaches for spontaneous BCI are most likely inappropriate, based on theoretical models, we still needed practical confirmation that users' modest performances at controlling a BCI could be partly due to these inappropriate training protocols. Thus, in our work, we proposed to study standard BCI training protocols without EEG signals, i.e., without a BCI. In particular, we studied how people could learn to perform two simple motor tasks using the same training tasks and feedback as those given to motor imagery BCI users. More precisely, we asked subjects to learn to draw on a graphic tablet a triangle and a circle (the correct size, angles, and drawing speed of these two shapes being unknown to the subject) that could be recognized by the system, using a synchronous training protocol and an extending bar as feedback, as in motor imagery-based BCI training. Our results show that most subjects (out of N=20) improved with this feedback and practice (i.e., the shapes they drew were increasingly more accurately recognized by the system), but that 15% of them completely failed to learn how to draw the correct shapes, despite the simplicity of the motor tasks. This suggests that part of BCI illiteracy/deficiency is likely due to the training protocols currently used.
From the huge variability in users' performances at BCI mastery emerged the following question: why do some people manage to learn using these protocols while others do not? Our hypothesis was that these protocols are not adapted to some users' profiles. Thus, we designed an experiment in which we looked for correlations between the personality and cognitive profile of users and their ability to learn to control a motor imagery BCI (MI-BCI). Our current results (N=18) show that 1) performances are strongly correlated with users' spatial abilities and 2) we can reliably predict these performances using a model including different psychological factors (such as abstractedness, self-reliance, or tension). These results are very encouraging, as they could lead to reflections about 1) exercises to improve users' spatial abilities and 2) solutions to take users' cognitive and personality profiles into account in BCI training approaches.
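The predictive model mentioned above can be sketched as an ordinary least-squares regression from user-profile scores to BCI performance. The data below are entirely synthetic (made-up weights and scores); only the procedure is illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-user predictor scores (18 users x 4 factors):
# spatial abilities, abstractedness, self-reliance, tension.
X = rng.standard_normal((18, 4))
true_w = np.array([8.0, 2.0, 1.5, -3.0])            # made-up weights
perf = 70 + X @ true_w + rng.standard_normal(18)    # synthetic BCI accuracy (%)

# Ordinary least-squares fit: perf ~ intercept + profile scores.
A = np.column_stack([np.ones(len(X)), X])
w, *_ = np.linalg.lstsq(A, perf, rcond=None)
predicted = A @ w

# Correlation between predicted and observed performance.
r = np.corrcoef(predicted, perf)[0, 1]
print(r > 0.9)  # strong fit on this synthetic data
```

With only 18 users and several predictors, a real analysis would of course use cross-validation to check that such a model generalizes rather than overfits.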
Furthermore, it is increasingly argued that visual feedback is not ideal for BCIs, since they are conceived for interaction situations in which the visual channel is often overtaxed. Tactile feedback might thus be more relevant. To test this hypothesis, we proposed a study comparing a standard visual feedback with an equivalent tactile feedback in an appealing training environment containing visual distractors (to mimic an interaction context in which the visual channel is overtaxed). Users had to learn to perform motor-imagery tasks as well as a counting task, and received either visual or vibrotactile feedback (see Figure). Our main result (N=18) is that people receiving tactile feedback perform significantly better (at both the motor-imagery and counting tasks). This kind of result should encourage the BCI community to replace standard BCI protocols with more motivating training environments and multimodal feedback.
Still regarding feedback, we explored what kind of information could help the user perform mental imagery tasks better. To this end, we looked for physiological features that could predict whether a mental task will be correctly recognized by the BCI, and that could be understood by the user. Among the different features we explored, it appears that the user's muscular relaxation, as measured from EMG activity collected by EEG channels, is one such feature. We are currently building and exploring new BCI training protocols that provide additional information about the user's muscular relaxation as complementary feedback.
Spatial filters are powerful tools for EEG classification in BCI design, able to reduce spatial blurring effects. In particular, optimal spatial filters have been designed to classify EEG signals based on band power features. Unfortunately, there are other relevant EEG features for which no optimal spatial filter exists. This is the case for Phase Locking Value (PLV) features, which measure the synchronization between two EEG channels. Therefore, we proposed to create such a pair of optimal spatial filters for PLV features. To do so, we optimized a functional measuring the discriminability of PLV features using a genetic algorithm. An evaluation of our algorithm on a motor imagery EEG data set showed that using optimized spatial filters led to higher classification performances, and that combining the resulting PLV features with traditional methods boosts overall BCI performances.
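To make the PLV feature concrete, the following is a minimal, illustrative sketch (not the optimized spatial-filter method described above, whose genetic-algorithm step is omitted): the PLV of two signals is the magnitude of the average unit phasor of their instantaneous phase difference, with phases obtained via the Hilbert transform. All names and the synthetic data are assumptions for illustration.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """PLV between two single-trial signals: magnitude of the mean
    complex phasor of the instantaneous phase difference (0 = no
    synchronization, 1 = perfect phase locking)."""
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * phase_diff)))

# Two noisy copies of a shared 10 Hz oscillation: strongly phase-locked.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
common = np.sin(2 * np.pi * 10 * t)
x = common + 0.1 * rng.standard_normal(t.size)
y = common + 0.1 * rng.standard_normal(t.size)
plv = phase_locking_value(x, y)
print(plv)  # close to 1 for synchronized signals
```

In the approach above, a pair of spatial filters would first project the multichannel EEG onto two virtual channels before this PLV computation, with the filter weights chosen to maximize the discriminability of the resulting feature.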
We also wrote a chapter providing an introductory overview and tutorial of signal processing techniques that can be used to recognize mental states from EEG signals in BCI. More specifically, this chapter presented how to extract relevant and robust spectral, spatial and temporal information from noisy EEG signals (e.g., band power features, spatial filters such as Common Spatial Patterns or xDAWN), as well as a few classification algorithms (e.g., Linear Discriminant Analysis) used to classify this information into mental-state classes. It also briefly touched on alternative, but currently less used, approaches.
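The band power plus LDA pipeline mentioned above can be sketched in a few lines: band-pass filter each trial, take the log of the average signal power per channel, and feed these features to a linear classifier. This is a simplified illustration on synthetic data (no CSP or xDAWN spatial filtering, and accuracy is measured on the training set only); the band choice and data are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def band_power(trials, fs, band):
    """Log band-power feature per channel: band-pass filter,
    square, average over time, take the log."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trials, axis=-1)
    return np.log(np.mean(filtered ** 2, axis=-1))

# Synthetic data: 40 trials x 4 channels x 2 s at 128 Hz; class 1
# carries extra mu-band (10 Hz) power on channel 0.
fs, n_trials = 128, 40
rng = np.random.default_rng(42)
trials = rng.standard_normal((n_trials, 4, 2 * fs))
labels = np.repeat([0, 1], n_trials // 2)
t = np.arange(2 * fs) / fs
trials[labels == 1, 0] += 2 * np.sin(2 * np.pi * 10 * t)

features = band_power(trials, fs, (8, 12))  # mu band (8-12 Hz)
clf = LinearDiscriminantAnalysis().fit(features, labels)
acc = clf.score(features, labels)
print(acc)  # high training accuracy on this easily separable data
```

In a real BCI, spatial filters such as CSP would be applied before the band power step, and performance would be assessed with cross-validation rather than training accuracy.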
This project was part of the "Villes transparentes" research project, initiated in 2013 in collaboration with Mappy (Solocal group) and Vectuel - VirtuelCity. It aimed at characterizing today's most common interaction techniques for street-level navigation in 3D digital cities on mobile touch devices, in terms of their efficiency and usability. To do so, we conducted a user study comparing target selection (Go-To), rate control (Joystick), position control, and stroke-based control navigation metaphors (see Figure). The results suggest that users performed best with the Go-To technique. Subjective comments showed a preference of novices for Go-To and of expert users for the Joystick technique. This work was published at the 3DUI 2014 conference.
Interactive Collaboration in Virtual Reality for Aerospace Scenarii:
duration: 2014-2017
PhD Thesis of Damien Clergeaud
partners: Airbus Group Innovations, Airbus Defence & Space
The objective of this work is to explore the problems of remote collaboration in the context of virtual reality for aerospace applications. It addresses interaction between an immersed user and remote operators equipped with various communication tools (desktop computers, tablets, touch tables, etc.), or between a user and a remotely operated robot.
Cap Sciences:
Potioc has strong relationships with the Cap Sciences museum (http://
Immersion:
Potioc has strong relationships with Immersion. In 2014, Immersion and Potioc notably co-supervised a Master student (Dennis Wobrock) on the topic "Using brain and physiological signals to assess 3D User Interfaces".
ANR Project ISAR:
duration: 2014-2017
coordinator: Martin Hachet
partners: LIG-CNRS (Grenoble), Diotasoft (Paris)
acronym: Interaction en Réalité Augmentée Spatiale / Interacting with Spatial Augmented Reality
The ISAR project (Interaction with Spatial Augmented Reality) focuses on the design, implementation, and evaluation of new paradigms to improve interaction with the digital world when digital content is directly projected onto physical objects (e.g., a ball in the figure). It opens new perspectives for tomorrow's exciting applications, beyond traditional screen-based ones.
website: http://
Inria ADT OpenViBE-NT:
duration: 2012-2014
partners: Inria teams Hybrid, Neurosys and Athena
coordinator: Anatole Lécuyer (Inria Rennes Bretagne Atlantique)
funded by Inria (Technological Development Project)
The aim of this project is to further develop OpenViBE, notably in order to (1) make the software evolve towards a new version that better fits the current and future needs of its users, (2) offer new and original functionalities, and (3) keep ensuring OpenViBE support and dissemination. The final objective is to further increase OpenViBE's usability and appeal, in order to strengthen the users' community surrounding the software and enable us to make it as viable and useful as possible in the long term. The developments will also enable the Inria teams involved (Potioc, Hybrid, Neurosys and Athena) to explore new research directions on BCI, such as adaptive BCIs, hybrid BCIs that combine EEG with other physiological sensors (e.g., heart rate, galvanic skin response, gaze, etc.), or new couplings between BCI and virtual reality to improve human training for BCI, thanks to new immersive feedback types.
website: http://
Inria ADT OpenViBE-X:
duration: 2014-2016
partners: Inria teams Hybrid and Athena
coordinator: Maureen Clerc (Inria Sophia Antipolis)
This is the follow-up project to OpenViBE-NT.
website: http://
Inria Project Lab BCI-LIFT:
partners: Inria team Athena (Inria Sophia-Antipolis), Inria team Hybrid (Inria Rennes), Inria team Neurosys (Inria Nancy), LITIS (Université de Rouen), Inria team DEMAR (Inria Sophia-Antipolis), Inria team MINT (Inria Lille), DyCOG (INSERM Lyon)
coordinator: Maureen Clerc (Inria Sophia Antipolis)
This project on BCI is currently in the evaluation process; the first meeting with all partners took place in October 2013.
The aim is to reach a next generation of non-invasive Brain-Computer Interfaces (BCI), more specifically BCIs that are easier to appropriate, more efficient, and suited to a larger number of people. With this concern for usability as our driving objective, we will build non-invasive systems that benefit from advanced signal processing and machine learning methods and from smart interface design, and where the user immediately receives supportive feedback. What drives this project is the concern that a substantial proportion of human participants are currently categorized as "BCI-illiterate" because of their apparent inability to communicate through BCI. Through this project, we aim at making it easier for people to learn to use BCIs, by implementing appropriate machine learning methods and developing user training scenarios.
website: http://
AIBLE-Helios:
duration: 2014-2015
partners: SATT Nancy Grand Est, Université de Lorraine
coordinator: Stéphanie Fleck (Université de Lorraine)
The AIBLE project (Augmented, Inquiry-Based, Learning, Environment) aims to provide a methodology and innovative media for improving the learning of basic astronomical phenomena by school groups (ages 8-11). As part of this project, Potioc will focus on the development of the final application, based on augmented reality and 3D manipulation, to provide a high-fidelity prototype.
PIA ville numérique "Villes transparentes":
duration: 2012-2014
partners: Pages Jaunes/Mappy, Vectuel/Virtuelcity
In the context of the call for proposals Ville numérique (Digital City) of the Investissement d'Avenir Program, the Potioc team was selected for the project "Villes transparentes" (Transparent Cities) in collaboration with Mappy (Pages Jaunes group) and Vectuel. In this two-year project, the Potioc team focused on the development of innovative interaction techniques for navigation in 3D urban environments.
DRAO:
duration: 2012-2014
partners: Inria teams Reves, manao, In-Situ
ANR Young Researcher Program (Adrien Bousseau, Reves team)
DRAO is a research project dedicated to drawing creation. Its first focus is on understanding how people draw, through studies and interviews with professionals. The second goal is the automation of some parts of the drawing process. Finally, the third goal is the creation of tools to teach drawing with digital tools.
website: https://
Interco3D:
partners: IRIT Toulouse
Recognized as official working group by AFIHM
The objective of this working group is to unite a community of actors involved in the design and use of interaction techniques for 3D spaces, i.e., perceiving, understanding, manipulating and moving within virtual 3D spaces.
website: http://
Program: DGA-DSTL Project
Project title: Assessing and Optimising Human-Machine Symbiosis through Neural signals for Big Data Analytics
Duration: 2014-2018
Coordinators: Ulster University (Northern Ireland, UK), Inria Bordeaux Sud-Ouest (France)
Abstract: This project's objective is to design new tools for Big Data analysis, and in particular visual analytics tools that tap into human cognitive skills as well as Brain-Computer Interfaces. The goal is to enable the user to identify and select relevant information much faster than can be achieved using automatic tools or traditional human-computer interfaces. More specifically, this project will aim at identifying, in a passive way, various mental states (e.g., different kinds of attention, mental workload, relevant stimulus perception, etc.) in order to optimize the display, arrangement or selection of relevant information.
Collaboration with the Bristol Interaction and Graphics (BIG) group, University of Bristol, UK (Head: Pr. Sriram Subramanian)
We have strong relationships with Sriram Subramanian. This has led to joint publications, numerous visits, and the co-supervision of a PhD thesis (Camille Jeunet).
Bordeaux Idex project "Conception de Système d'interfaces cerveau-ordinateur prenant en compte les facteurs humains afin d'optimiser l'apprentissage de l'utilisateur" (Design of brain-computer interface systems taking human factors into account to optimize user learning), for an international PhD project
partners: Bordeaux Segalen University (Handicap & Système nerveux team), Bristol University (BIG team)
duration: October 2013 - September 2016
LIRA Stress and Relaxation project: LIfe-style Research Association, Lifestyle Management: Stress and Relaxation
European framework agreement (accord cadre Européen)
Coordinator: Frederic Alexandre
Other partners: Philips (Netherlands), Fraunhofer (Germany), Inria teams Hybrid and Mimetic
Abstract: The Stress and Relaxation project aims at offering services to a user, at home or at work, to help this user evaluate and control his level of stress.
duration: 2011 - 2021
Pr. Roger N'Kambou (Department of Computer Science, UQAM, Université du Québec à Montréal) is a specialist of Intelligent Tutoring Systems (ITS). We are setting up a collaboration with him to develop such a system in order to optimise human learning in Brain-Computer Interfaces (BCI), and thus improve performances with such systems. We visited Pr. N'Kambou at UQAM in Montreal in May, and he visited us at Inria in December, when we organized a workshop on human learning and computer science.
We are collaborating with Dr. Cuntai Guan (I2R, Singapore), Pr. Jonathan Bromberg (Kansas University, USA) and Pr. Gerwin Schalk (Wadsworth center, USA) on ElectroCorticoGraphic (ECoG) signal analysis.
This year, the Potioc team has hosted two international PhD students:
Flavio Bertini, University of Bologna, Italy (December 2013-February 2014)
Nicholaos Katzakis, Osaka University, Japan (September 2014 until November 2014)
Potioc has also hosted an international Master student:
Julia Schumacher, Technische Universitaet Berlin, Germany (April 2014 - October 2014)
Camille Jeunet worked at the University of Bristol, UK, in the BIG (Bristol Interaction and Graphics) group of Pr. Sriram Subramanian, from July to September 2014.
IEEE VR 2015 Lab and project presentation Chair (M. Hachet)
IEEE 3DUI (M. Hachet)
first OpenViBE workshop, September 2014, satellite workshop of the 6th International BCI conference, Graz, Austria, co-organized with Inria teams Athena, Hybrid, Neurosys and with the company Mensia Technologies (F. Lotte)
BCI Workshop for art, Academy of Media Arts Cologne (KHM), Germany, February 2014 (C. Mühl)
Interco3D Workshop at IHM14 (M. Hachet and F. Lotte)
Workshop «Perspectives on Gender and Product Design» at CHI 2014 (A. Brock)
"Design of BCI based on Oscillatory activity: signal processing and more", International BBCI Winter School on Neurotechnologies, Berlin, Germany, February 2014 (F. Lotte)
"EEG Signal Processing and Classification for Brain Computer Interfacing (BCI) Applications", with A.Konar and A. Sinharay , ICASSP 2014, , Florence, Italy, May 2014 (F. Lotte)
6th International BCI conference (F. Lotte)
CHI 2015 Program Committee (M. Hachet)
ITS 2014 Poster Committee (A. Brock)
MobileHCI 2014 Poster Committee (A. Brock)
PRNI 2014 (F. Lotte)
SUI 2014 (M. Hachet)
Web3D 2014 (M. Hachet)
Best talk and poster award committee, 6th International BCI Conference, Graz, Austria (F. Lotte)
PhD Thesis award 2014 from AFIA (French Association for Artificial Intelligence) (F. Lotte)
Best Poster award committee, SUI 2014, Honolulu, USA (M. Hachet)
CHI 2014 (A. Brock, M. Hachet, F. Lotte)
CHI 2015 (A. Brock, M. Hachet, F. Lotte)
EICS 2014 (A. Brock)
Eurohaptics 2014 (A. Brock)
ICASSP 2014 (F. Lotte)
IEEE SMC 2014 (F. Lotte)
IHM 2014 (A. Brock, F. Lotte)
International BCI conference 2014 (F. Lotte)
ITS 2014 (A. Brock)
MobileHCI 2014 (A. Brock)
NordiChi 2014 (A. Brock)
PRNI 2014 (F. Lotte)
UIST 2014 (A. Brock, M. Hachet, F. Lotte)
journal ACM JOCCH Guest editor, Special issue on « Interacting with the past » 2014 (M. Hachet)
Brain-Computer Interfaces journal Guest editor, Special issue on “Affective Brain-Computer Interfaces”, 2014 (C. Mühl)
ACM TOCHI (F. Lotte)
BCI (F. Lotte)
Behaviour & Information Technology (A. Brock)
IEEE CG&A (F. Lotte)
IEEE Trans. Affective Computing (F. Lotte)
IEEE Trans. Biomed. Eng. (F. Lotte)
IEEE Trans. on Computational Intelligence and AI in Games (J. Frey)
IEEE Trans. Cybernetics (F. Lotte)
IEEE Trans. on Haptics, Special Issue: Haptic Assistive Technology for Individuals who are Visually Impaired (A. Brock)
IEEE Trans. Human Machine Systems (F. Lotte)
International Journal of Human-Computer Studies (A. Brock)
JMIHI (F. Lotte)
J. Neural Eng. (F. Lotte)
Pattern Recognition (J. Frey)
PLOS-One (F. Lotte)
Presence (F. Lotte)
Proceedings of the IEEE (F. Lotte)
Bordeaux University
Master: Jérémy Laviole, Experimentation and Development Projects, 5h, M2, University of Bordeaux, France
Master: Jérémy Laviole, Programming Project, 5h, M1, University of Bordeaux, France
Master: Jérémy Frey, Programming Project, 27h, M1, University of Bordeaux, France
Master: Anke Brock, Virtual Reality and 3D Interaction, 7.5h eqtd, M2 Cognitive Science, University of Bordeaux, France
Master : Martin Hachet, Virtual Reality and 3D Interaction, 12h eqtd, M2 Cognitive Science, University of Bordeaux, France
Master : Fabien Lotte, Virtual Reality and 3D Interaction, 5h, M2 Cognitive Science, University of Bordeaux, France
Master : Jérémy Laviole, Initiation to Research, 5h, M2, Bordeaux University, Bordeaux, France
Licence : Camille Jeunet, Knowledge and Representation, 16h, L3 Mathematics and Informatics applied to Human and Social Sciences, University of Bordeaux, France
Licence : Camille Jeunet, Human Sciences and Methods, 16h, L1 Mathematics and Informatics applied to Human and Social Sciences, University of Bordeaux, France
Licence : Camille Jeunet, Cognitive Psychology, 16h, L1 Psychology, University of Bordeaux, France
Licence : Camille Jeunet, Scientific Methodology, 16h, L2 Psychology, University of Bordeaux, France
Enseirb MatMéca
Master : Anke Brock, Video Games and Interaction, 12h eqtd, 3rd year (M2), Enseirb, Bordeaux, France
Master : Martin Hachet, Video Games and Interaction, 9h eqtd, 3rd year (M2), Enseirb, Bordeaux, France
Master : Jérémy Laviole, Project Multimedia Analysis, 16h, 3rd year (M2), Enseirb, Bordeaux, France
Optical Institute Graduate School
Master : Jérémy Laviole, High Performance Programming, 20h, 2nd year (M1), Optical Institute Graduate School, Bordeaux, France
Master : Jérémy Laviole, Image Analysis, 20h, 2nd year (M1), Optical Institute Graduate School, Bordeaux, France
Other Universities
Master : Anke Brock, Human-Computer Interaction, 12h eqtd, 2nd year (M2SIR), University Toulouse, France
Master: Fabien Lotte, Virtual Reality, Accessibility and Brain-Computer Interaction, 4h, M1/M2 level, ENSSAT Lannion, France
PhD Students
Damien Clergeaud, "Interactive Collaboration in Virtual Reality for Aerospace Scenarii", started November 1st, 2014, Pascal Guitton
Jérémy Frey (PhD candidate in Computer Science, Bordeaux University), “Using Passive Brain-Computer Interfaces to assess and optimize 3D User Interfaces”, started October 1st, 2012, Fabien Lotte and Martin Hachet
Renaud Gervais, "Organic User Interfaces", started December 1st, 2012, Martin Hachet
Camille Jeunet (PhD candidate in Cognitive Sciences, Bordeaux University), “Improving User training approaches for Brain-Computer Interface", started October 1st, 2013, Fabien Lotte, Martin Hachet, Bernard N'kaoua and Sriram Subramanian.
Joan Sol Roo, "Interaction with Spatial Augmented Reality", started November 1st 2014, M. Hachet
Lorraine Perronet (PhD candidate in Computer Science, Rennes University): “Neurofeedback and brain Rehabilitation based on EEG and fMRI”, 2014-2017 (expected), Fabien Lotte co-supervising with Anatole Lécuyer, Christian Barillot and Maureen Clerc
Stephanie Lees (PhD candidate in Computer Science, Ulster University, UK): “Assessing and Optimising Human-Machine Symbiosis through Neural signals for Big Data Analytics”, 2014-2018 (expected), Fabien Lotte co-supervising with Damien Coyle, Paul McCullagh and Liam Maguire
Master Students
Damien Clergeaud, « Interactive juggling », Martin Hachet
Jean Bui Quang, « Hybrid optical bench », Martin Hachet
Julia Schumacher (Master student in Computational Neuroscience, BCCN, Germany), "Explanatory feedback for Brain-Computer Interface training", Fabien Lotte
Loïc Renault (Master student in Neurosciences, Bordeaux University), "The impact of self-paced training on BCI performances", Fabien Lotte
Joao-Pedro Berti Ligabo (Master student, Institut d'Optique Graduate School), "EEG signal denoising in OpenViBE", Fabien Lotte and Alison Cellard
Dennis Wobrock (Master student in Cognitics, ENSC), "Physiological computing for 3D user interaction", Fabien Lotte co-supervising with Julien Castet
Morgane Sueur (Master 1 Cognitive Science), "Human Learning in BCIs", Camille Jeunet
Aurélien Appriou (Master 1 Cognitive Science), "Assessing stereoscopy with EEG", Jérémy Frey
Other Supervision
Bachelor : supervision of student project about "Using BCIs for stroke rehabilitation", Bordeaux University, France, Camille Jeunet
Master : supervision of student project on 3D tangible tabletops, M2, Bordeaux University, France, Anke Brock and Renaud Gervais
Thesis Reviewer: Thi Thuong Huyen Nguyen, November 2014, INSA de Rennes, "Proposition of new metaphors and techniques for 3D interaction and navigation preserving immersion and facilitating collaboration between distant users", Martin Hachet
Thesis Reviewer: Matthieu Duvinage, University of Mons, Belgium, 2014, Fabien Lotte
Thesis Reviewer: Wojciech Samek, TU Berlin, Germany, 2014, Fabien Lotte
Thesis Examiner: Yosra Rekik, December 2014, Université Lille 1, "Understanding, modeling and designing tactile gestural interaction" (comprendre, modéliser, et concevoir l'interaction gestuelle tactile), Martin Hachet
Thesis Examiner: Romain Trachel, Inria Sophia-Antipolis/Université de Marseille, France, 2014, Fabien Lotte
Thesis Examiner: Damien Lesenfants, Liège University, Belgium, 2014, Fabien Lotte
Thesis Examiner: Jonathan Grizou, Inria Bordeaux Sud-Ouest/Bordeaux University, France, 2014, Fabien Lotte
Popularization talks
Presentation about Inria to High School Students during Aquitec, January (C. Jeunet)
Presentation about Cognitive Sciences and Inria to High School Students at a local high school, March (C. Jeunet)
Tangible and Gestural Interaction Forum (http://
Talk about Brain Computer Interfaces at "Café de la Connaissance", May (A. Cellard and C. Jeunet)
New European Media Summit, Brussels, Belgium, September 30th (J. Laviole, F. Lotte)
Presentation on "How to design the computers of the future" and speedmeeting during "Filles et mathématiques: une équation lumineuse" (event to raise interest in mathematics in female high school students), University Toulouse, December 10th (A. Brock)