The overall objective of Potioc is to design, develop, and evaluate new approaches that provide rich interaction experiences between users and the digital world. We aim at stimulating motivation, curiosity, engagement, and pleasure of use. In other words, we are interested in popular interaction, mainly targeted at the general public.
We believe that such popular interaction may enhance learning, creation, entertainment, and popularization of science, which are the main application areas targeted by our project-team. To this end, we explore input and output modalities that go beyond standard interaction approaches based on mice/keyboards and (touch)screens. Similarly, we are interested in 3D content, which offers new opportunities compared to traditional 2D contexts. More precisely, Potioc explores interaction approaches that rely notably on interactive 3D graphics, augmented and virtual reality (AR/VR), tangible interaction, brain-computer interfaces (BCI), and physiological computing.
Such approaches hold great promise in a number of fields. For example, interactive 3D graphics have become ubiquitous in industry, where they have revolutionized usages, notably by improving work cycles for conception and simulation tasks. However, except for video games, we believe that such approaches are still far from being exploited to their full extent outside such industrial contexts, despite having a huge potential for the masses in the areas targeted by our project.
In order to design interactive systems that can benefit many people, and not only expert users, we propose to change the usual design approaches, which are generally driven by criteria such as speed, efficiency, or precision. Instead, we give more credit to the user experience, in particular criteria such as interface appeal and the enjoyment arising from its use. Indeed, these criteria have often been neglected in academic research, whereas we believe they are crucial for users who are novices with 3D interaction, multisensory spaces, or brain-computer interfaces. An interface with a strong appeal and enjoyment factor will motivate users to use and benefit from the system.
In the Potioc team, we follow a multidisciplinary approach in order to tackle the problem as a whole, from the most fundamental work on human sensori-motor and cognitive abilities and preferences, through the technical aspects of interaction at both the hardware and software levels, to the aspects linked to usage and applications.
The project of the Potioc team is oriented along three axes:
Understanding humans interacting with the digital world
Creating interactive systems
Exploring new applications and usages
These axes are depicted in Figure .
Objective 1 is centered on human sensori-motor and cognitive abilities, as well as user strategies and preferences, for completing interaction tasks. Our target contribution for this objective is a better understanding of humans interacting with interactive systems. The impact of this objective is mainly at a fundamental level.
In Objective 2, our goal is to create interactive systems. This may include hardware parts, where new input and output modalities are explored. It also includes software parts, which are strongly linked to the underlying hardware components. Our target contribution in Objective 2 is to develop (hardware/software) interaction techniques allowing humans to perform interaction tasks.
Finally, in Objective 3, we consider interaction at a higher level, taking into account factors linked to specific application domains and usages. Our target contribution in this area is the exploration and emergence of new applications and usages that benefit from the results of the project. With this objective, we mainly target a societal impact.
Of course, strong links exist between the three objectives of the project. For example, the results obtained in objective 1 guide the development of objective 2. Conversely, new systems developed in objective 2 may feed research questions of objective 1. There are similar links with objective 3.
Our first objective is centered on the human side. Our aim is not to enhance general knowledge about the human being, as a research team in psychology would do. Instead, we focus on human skills and behaviors during interaction processes. To this end, we conduct experiments that allow us to better understand what users like, and where and why they encounter difficulties. Thanks to these investigations, we are able to design interaction techniques and systems (described in Objective 2) that are well suited to the targeted users. We believe that this fundamental piece of work is the first step required for the design of usable popular interactions. We are particularly interested in 3D interaction tasks, for which we design dedicated experiments. We also explore a new approach based on physiological and brain (ElectroEncephaloGraphy - EEG) signals for the evaluation of these interactions.
Interacting with digital content displayed on 2D screens has been extensively studied in HCI. Less conventional contexts, however, have received less attention. This is the case of 3D environments, immersive virtual environments, augmented reality, and tangible objects. With the final goal of making interaction in such contexts user-friendly, we conduct experiments to better understand user strategies and performance. This allows us to propose guidelines that help designers create tools accessible to non-expert users.
Recently, physiological computing has been shown to be a promising complement to Human-Computer Interaction (HCI) in general, and to 3D User Interfaces (3DUI) in particular, in several directions. Within this research area, we are interested in using various physiological signals, and notably EEG signals, as a new tool to objectively assess the ergonomic quality of a given (3D) UI, and to identify where and when the pros and cons of this interface arise, based on the user's mental state during interaction. For instance, estimating the user's mental workload during interaction can give insights about where and when the interface is cognitively difficult to use. This could be useful for 2D HCI in general, and even more so for 3DUI. Indeed, in a 3DUI, the user's perception of the 3D scene - part of which could potentially be measured in EEG - is essential. Moreover, the usual need for a mapping between the user inputs and the corresponding actions on 3D objects makes 3DUIs and interaction techniques more difficult to assess and to design.
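As a minimal illustration of how EEG could index mental workload, the sketch below computes the classic theta/alpha band-power ratio on a synthetic signal (the signal, the naive DFT, and the threshold are illustrative assumptions, not our actual pipeline):

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Naive DFT power in [f_lo, f_hi] Hz (real pipelines would use Welch's method)."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n
    return power

fs = 128                                     # sampling rate (Hz)
t = [i / fs for i in range(fs * 2)]          # one 2-second epoch
# Synthetic EEG: strong 6 Hz (theta) component, weaker 10 Hz (alpha) component
eeg = [1.0 * math.sin(2 * math.pi * 6 * x) + 0.5 * math.sin(2 * math.pi * 10 * x)
       for x in t]

theta = band_power(eeg, fs, 4, 8)
alpha = band_power(eeg, fs, 8, 13)
workload_index = theta / alpha  # ratio often reported to rise with mental workload
print(workload_index)
```

On this synthetic epoch the theta component dominates, so the index comes out above 1; in practice such an index would be computed per interaction phase to spot where the interface is cognitively demanding.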
Although very promising for numerous applications, BCIs mostly remain prototypes not used outside laboratories, due to their low reliability. Poor BCI performance is partly due to imperfect EEG signal processing algorithms, but also to users who may not be able to produce reliable EEG patterns. Indeed, BCI use is a skill, requiring users to be properly trained to achieve BCI control. If they cannot perform the desired mental commands, no signal processing algorithm can identify them. Therefore, rather than improving EEG signal processing alone, an interesting research direction is to also guide users towards mastery of BCI control. We aim at addressing this objective. We are notably exploring theoretical models and guidelines from educational sciences to improve BCI training protocols. We also study which user profiles (personality and cognitive traits) fail or succeed at learning BCI control. Finally, we explore new feedback types and new EEG visualization techniques in order to help users gain BCI control skills more efficiently. These new feedbacks and visualizations notably aim at providing BCI users with more information about their EEG patterns, in order to identify relevant BCI control strategies more easily, as well as at motivating and engaging them in the learning task.
Interaction capabilities and needs largely depend on the target user group. In the Potioc project-team, we work with people with special needs. For example, we work with children in the context of education, which requires us to design interfaces that are usable, engaging, and support learning for this target group. Furthermore, we work with people with cognitive or perceptual disabilities, which requires us to consider accessibility, while at the same time designing interfaces that are learnable and enjoyable to use. In order to meet the needs of these different target groups, we apply participatory and user-centred design methods.
Our objective here is to create interactive systems and design interaction techniques dedicated to the completion of interaction tasks. We divide our work into three main categories:
Interaction techniques based on existing Input/Output (IO) devices.
New IO and related techniques.
BCI and physiological computing.
When using desktop IOs (i.e., mice/keyboards/monitors), a big challenge is to design interaction techniques that allow users to complete 3D interaction tasks. Indeed, the desktop IO space is mainly dedicated to the completion of 2D interaction tasks and is not well suited to 3D content; consequently, 3D user interfaces need to be designed with great care. In the past few years, we have been particularly interested in the problem of interaction when the 3D content is displayed on a touchscreen. Indeed, standard (2D) HCI has evolved from mouse to touch input, and numerous research projects have been conducted on this transition. In 3D, on the contrary, very little work has been proposed. We are contributing to moving desktop 3D UIs from the mouse to the touch paradigm; what we used to do with mice in front of a screen does not work well on touch devices anymore. In the future, we will continue designing new interaction techniques based on standard IOs (e.g., pointing devices and webcams) that target the main objective of Potioc, which is to enhance the interaction bandwidth for non-expert users.
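To make the 2D-input-to-3D-task mapping concrete, here is a minimal sketch of the classic arcball technique, which turns a 2D pointer or touch drag into a 3D rotation (this is a textbook mapping given for illustration, not one of our published techniques):

```python
import math

def to_sphere(x, y, radius=1.0):
    """Project a normalized 2D screen point onto a virtual arcball sphere."""
    d = math.hypot(x, y)
    if d < radius / math.sqrt(2):      # inside the ball: lift onto the sphere
        z = math.sqrt(radius * radius - d * d)
    else:                              # outside: fall back to a hyperbolic sheet
        z = radius * radius / (2 * d)
    return (x, y, z)

def drag_rotation(p0, p1):
    """Axis and angle of the 3D rotation taking sphere point p0 to p1."""
    a, b = to_sphere(*p0), to_sphere(*p1)
    cross = (a[1] * b[2] - a[2] * b[1],
             a[2] * b[0] - a[0] * b[2],
             a[0] * b[1] - a[1] * b[0])
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(c * c for c in cross))
    return cross, math.atan2(norm, dot)

# A horizontal drag from the screen center rotates the object about the y axis
axis, angle = drag_rotation((0.0, 0.0), (0.5, 0.0))
print(round(math.degrees(angle)))
```

The same principle underlies many touch-based 3D manipulation techniques: the difficulty lies in choosing mappings that remain predictable for non-expert users.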
Beyond standard IOs, we are interested in exploring new IO modalities that may make interaction easier, more engaging, and more motivating. In Potioc, we design new interactive systems that exploit unconventional IO modalities such as stereoscopy, 3D spatial input, augmented reality, and so on. In particular, tangible interaction and spatial augmented reality are major subjects of interest for us. Indeed, we believe that directly manipulating physical objects to interact with the digital world has great potential, in particular when the general public is targeted. With such approaches, the computer disappears, and the user interacts with the digital content as he or she would with physical content, which reduces the distance to the manipulated content. As an example, we recently designed Teegi, a new system based on a unique combination of spatial augmented reality, tangible interaction, and real-time neurotechnologies. With Teegi, a user can visualize and analyze his or her own brain activity in real time, on a tangible character that can be easily manipulated and interacted with. Such unconventional user interfaces based on rich sensing modalities hold great promise in the field of popular interaction.
We are also interested in designing systems that combine different sensory modalities, such as vision, touch and audition. Concrete examples include the design of tangible user interfaces or interfaces for visually impaired people. It has been shown that multimodality can provide rich interaction that can efficiently support learning, and it is also important in the context of accessibility.
Although Brain-Computer Interfaces (BCI) have demonstrated their tremendous potential in numerous applications, they are still mostly prototypes, not used outside laboratories. This is mainly due to the following limitations:
Performances: the poor classification accuracies of BCIs make them inconvenient to use or simply useless compared to available alternatives
Stability and robustness: the sensitivity of ElectroEncephaloGraphic (EEG) signals to noise and their inherent non-stationarity make the already poor initial performances difficult to maintain over time
Calibration time: the need to tune current BCIs to each user’s EEG signals makes their calibration times too long.
As part of our research on EEG-based BCIs, we notably aim at addressing these limitations by designing robust EEG signal processing tools with minimal calibration times, in order to build practical BCI systems that are usable and useful outside laboratories. To do so, we explore the design of alternative features and robust spatial filtering algorithms to make BCIs more robust to noise and non-stationarities, as well as more accurate. We also explore artificial EEG data generation and user-to-user data transfer to reduce calibration times.
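The artificial-data-generation idea can be sketched as follows: new training epochs are assembled by recombining time segments drawn from the few real calibration epochs of a class, so the classifier sees more (plausible) examples without lengthening calibration. The function names and toy single-channel data below are illustrative assumptions:

```python
import random

def augment_epochs(epochs, n_new, n_segments=4, seed=0):
    """Generate artificial epochs by recombining time segments of real ones
    (a sketch of segment-recombination data augmentation)."""
    rng = random.Random(seed)
    seg_len = len(epochs[0]) // n_segments
    new_epochs = []
    for _ in range(n_new):
        parts = []
        for s in range(n_segments):
            donor = rng.choice(epochs)  # pick a donor epoch for each segment
            parts.extend(donor[s * seg_len:(s + 1) * seg_len])
        new_epochs.append(parts)
    return new_epochs

# Three real calibration epochs of one class (single channel, 8 samples, toy values)
real = [[i + 10 * e for i in range(8)] for e in range(3)]
fake = augment_epochs(real, n_new=5, n_segments=4)
print(len(fake), len(fake[0]))  # 5 artificial epochs, same length as the real ones
```

Each artificial epoch keeps the within-segment temporal structure of real data while mixing trials, which is what makes the augmented set useful for training with short calibrations.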
Objective 3 is centered on applications and usages. Beyond human sensori-motor and cognitive skills (Objective 1) and the hardware and software components (Objective 2), Objective 3 takes into account broader criteria for the emergence of new usages and applications in various areas, in particular in the scope of education, art, popularization of science, and entertainment. Our goal here is not to develop full-fledged end-user applications. Instead, our contribution is to stimulate the evolution of current applications with new engaging interactive systems.
Education is at the core of the motivations of the Potioc group. Indeed, we are convinced that the approaches we investigate—which target motivation, curiosity, pleasure of use and high level of interactivity—may serve education purposes. To this end, we collaborate with experts in Educational Sciences and teachers for exploring new interactive systems that enhance learning processes. We are currently investigating the fields of astronomy, optics, and neurosciences. We are also working with special education centres for the blind on accessible augmented reality prototypes. In the future, we will continue exploring new interactive approaches dedicated to education, in various fields.
Popularization of science is also a key domain for Potioc. Focusing on this subject allows us to draw inspiration for the development of new interactive approaches. In particular, we have built a strong partnership with Cap Sciences, a center dedicated to the popularization of science in Bordeaux that welcomes thousands of visitors every month. This partnership was initiated with the ANR national project InSTInCT, whose goal was to study the benefits of 3D touch-based interaction in public exhibitions. The project has led to the creation of a Living Lab where several systems developed by Potioc have been, and will continue to be, tested by visitors. This provides us with very interesting observations that go beyond the feedback we can obtain in our controlled lab experiments.
Art, which is strongly linked with emotions and user experiences, is also a target area for Potioc. We believe that the work conducted in Potioc may be beneficial for creation from the artist's point of view, and may open new interactive experiences from the audience's point of view. As an example, we are working with colleagues who are specialists in digital music, and with musicians. We are also working with jugglers and mockup builders with the goal of enhancing interactivity and user experience.
Similarly, entertainment is a domain where our work may have an impact. We notably explored BCI-based gaming and non-medical applications of BCI, as well as mobile Augmented Reality games. Once again, we believe that our approaches that merge the physical and the virtual world may enhance the user experience. Exploring such a domain will raise numerous scientific and technological questions.
Our project aims at providing rich interaction experiences between users and the digital world, in particular for non-expert users. The final goal is to stimulate understanding, learning, communication and creation. Our scope of applications encompasses
education
popularization of science
art
entertainment
See "Objective 3: Exploring new applications and usages" for a detailed description.
ERC Grant "BrainConquest: Boosting Brain-Computer Communication with high Quality User Training" (Fabien Lotte)
EFRAN project e-tac "Tangible and augmented interface for collaborative learning" (Martin Hachet)
First accessible MOOC on "Accessibilité numérique" https://
PapARt is a software development kit (SDK) that enables the creation of interactive projection mapping (See https://
Participants: Jeremy Laviole, Martin Hachet
Helios is a software tool (Unity3D) that we have developed in collaboration with Stéphanie Fleck from Université de Lorraine. It is dedicated to learning astronomy at school and is based on augmented reality and tangible interaction. See Section .
Participants: Robin Gourdel, Jérémy Laviole, Benoit Coulais, Martin Hachet.
Partners: Université de Lorraine - SATT Nancy Grand-Est.
We have developed Aïana, a MOOC player, with the support of the Inria MOOC Lab. Aïana offers original interaction features in order to accommodate a wide spectrum of users, including persons with disabilities. The first version of Aïana has been used by the 3700 participants of the Digital Accessibility MOOC we produced on the national MOOC platform FUN. See Section .
Participants: Pierre-Antoine Cinquin, Pascal Guitton
Partners: LearningLab Inria
As part of the HOBIT project (see Section ), we continue enhancing the platform dedicated to the simulation and augmentation of optics experiments. In particular, this year, we made a major evolution by making the system reconfigurable: various optical components can be plugged in, and the simulation and augmentations are updated accordingly.
Participants: Benoit Coulais, David Furio, Martin Hachet.
Partners: Université de Bordeaux - IUT de Bordeaux, LaBRI, IMS, CELIA
Participants: David Furio, Benoit Coulais, Martin Hachet
Practical work in optics learning supports the construction of knowledge, in particular when the concept to be learned remains diffuse. To overcome the limitations of current experimental setups, we have designed a hybrid system that combines physical interaction and numerical simulation. This system relies on 3D-printed replicas of optical elements, which are augmented with pedagogical information (see Figure ). In a first step, we focused on the well-known Michelson interferometer experiment, widely studied in undergraduate science programs. A three-month user study with 101 students and 6 teachers showed that, beyond the practical aspects offered by this system, such an approach enhances technical and scientific learning compared to a standard Michelson interferometer experiment. A second version of HOBIT is currently being developed. This new version will let us simulate and augment multiple experiments related to optics, such as polarization or Young's interferometer.
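The kind of physics such an augmented setup simulates can be illustrated with the ideal two-beam interference formula at the Michelson output, where the optical path difference is twice the mirror displacement (a textbook sketch, not the actual HOBIT code):

```python
import math

def michelson_intensity(d_mirror, wavelength, i0=1.0):
    """Ideal two-beam interference intensity at the Michelson output.
    The optical path difference is twice the mirror displacement."""
    delta = 2 * d_mirror                      # optical path difference
    phase = 2 * math.pi * delta / wavelength
    return 2 * i0 * (1 + math.cos(phase))     # I = 2 I0 (1 + cos phi)

lam = 633e-9  # He-Ne laser wavelength (m), typical in teaching labs
bright = michelson_intensity(0.0, lam)        # zero path difference: bright fringe
dark = michelson_intensity(lam / 4, lam)      # quarter-wave displacement: dark fringe
print(bright, dark)
```

Driving such a formula from the tracked positions of the 3D-printed replicas is what lets a hybrid setup display physically plausible fringes without the alignment fragility of a real interferometer.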
A paper presenting HOBIT has been (conditionally) accepted at ACM CHI 2017.
Participants: Joan Sol Roo, Renaud Gervais, Jeremy Frey, Martin Hachet
Digital technology has become fully integrated into our daily lives; we use it for entertainment, productivity, and our social lives. However, the potential of leveraging technology to improve its users' overall happiness and life satisfaction is still largely untapped. Mindfulness, the act of paying deliberate and non-judgmental attention to the present moment, has been shown to have a positive impact on a person's subjective well-being. With this in mind, we created Inner Garden, an ambient mixed reality installation inspired by a zen garden, comprising an augmented sandbox along with a virtual reality modality to support mindful experiences (Figure ). By shaping the sand, the user creates a living miniature world that is projected back onto the sand. Moreover, using a VR headset, she can take a moment to herself by actually going inside her own garden to meditate. The natural elements of the garden are connected to real-time physiological measurements, such as breathing, helping the user stay focused on the body. We evaluated the system through a first user study and consulted meditation teachers, who envisioned the use of the garden in their teaching, especially for novice practitioners. The reception of the system seems to indicate that technology can, when designed carefully, both engage users and foster well-being.
A paper presenting Inner Garden has been (conditionally) accepted at ACM CHI 2017.
Participants: Joan Sol Roo and Martin Hachet
Spatial Augmented Reality (SAR) allows a user, or a group of users, to benefit from digital augmentations embedded directly into the physical world. This enables co-located information and unobstructed interaction. On the other hand, SAR suffers from limitations that are inherently linked to its physical dependency, which is not the case for see-through or immersive displays. In this work, we explore how to facilitate the transition from SAR to VR, and vice versa, integrating both into a unified experience (Figure ). We developed a set of interaction techniques and obtained first feedback from informal interviews.
A technote presenting this work has been (conditionally) accepted at IEEE 3DUI 2017.
Participants: Robin Gourdel, Benoit Coulais, Jeremy Laviole, Martin Hachet
We have worked with Stéphanie Fleck (Université de Lorraine) to improve the learning environment AIBLE that she had imagined (see http://
Helios basically consists of a standard laptop computer, a webcam, and printable AR markers placed on tangible props and on dedicated pedagogical cards (see Figure ). The (virtual) celestial bodies are displayed on the screen, and rich visual feedback helps learners understand various phenomena (e.g., shadow cones, time zones, and so on). In , we discuss how such an approach allows learners to better understand abstract phenomena.
Participants: Damien Clergeaud and Pascal Guitton
The Airbus company regularly uses virtual reality for design, manufacturing, and maintenance. We work with them on collaborative interaction in order to enable efficient collaboration between operators immersed in the same virtual environment from remote locations and with heterogeneous equipment (large displays, CAVE, HMD). More precisely, we have developed tools to point at and manipulate 3D objects, to remotely visualize the virtual environment, to be aware of remote manipulations, and to describe tool and component trajectories (Figure ). These tools have been validated by Airbus experts, and the next step is to integrate them into their global process.
We are also working on Through-The-Lens interaction techniques to ease collaboration in asymmetric tasks that require a guide and an operator. Through-The-Lens techniques enable the guide to interact with the surroundings of the operator in order to help with the task to be performed. A paper presenting such a technique has been (conditionally) accepted at IEEE 3DUI 2017.
Participant: Martin Hachet
Together with Florent Berthaut (Université Lille 3), we presented a set of works that attempt to extend the frontiers of music creation as well as the experience of audiences attending digital performances. Indeed, the power of interactive 3D graphics, immersive displays, and spatial interfaces is still under-explored in domains where the main target is to enhance creativity and emotional experiences. The goal of our work is to connect sounds to interactive 3D graphics that musicians can interact with and the audience can observe .
Participant: Anke Brock
Visually impaired people face important challenges related to orientation and mobility. Accessible geographic maps are helpful for acquiring knowledge of an urban environment. Historically, raised-line paper maps with braille text have been used, but these maps have significant limitations. For instance, only a small percentage of the visually impaired population reads braille. Recent technological advances have enabled the design of accessible interactive maps that aim to overcome these limitations. We designed Mappie, an accessible interactive map prototype based on a multi-touch screen with a raised-line map overlay and speech output (Figure ). We then deployed Mappie in a class of seven children and one caretaker for three months. Our formative study showed promising results and provided insights into the design of accessible interactive maps, such as using olfactory and gustatory modalities to foster reflective learning, and using tangible objects to support storytelling. Following this first study, we designed MapSense as an extension of Mappie. MapSense uses the same hardware and interaction techniques as Mappie, but additionally provides fourteen "Do-It-Yourself" conductive tangibles. Some tangibles could be filled with scented material, such as olive oil, smashed raisins, or honey, thus creating a real multi-sensory experience. The map was explored during two classes of three hours separated by a week, taught conjointly by a locomotion trainer and a specialized teacher. We observed that the map and tangible objects triggered strong positive emotions and stimulated spatial learning as well as the creativity of the visually impaired students. This work has been conducted as part of the PhD thesis of Emeline Brulé under the supervision of Gilles Bailly and Annie Gentes, and in collaboration with the IRIT research lab in Toulouse. It has been published at CHI'16 .
As part of the postdoc of Stephanie Giraud at IRIT Toulouse under the supervision of Christophe Jouffrais, we have investigated how to print entire interactive maps in 3D, allowing users to construct a city like a puzzle. We conducted a user study comparing an interactive map that was entirely 3D printed to a raised-line map with braille text (Figure , left). Our results suggest that the interactive map is significantly more effective for providing spatial knowledge than a tactile paper map with braille.
Participants: Julia Chatain, Anke Brock, Martin Hachet
With recent technological advances, the shapes of mobile devices are evolving. For example, we now see the emergence of new types of devices in the form of autonomous aerial vehicles (drones) that are becoming available in our everyday environment. As drones become increasingly autonomous, it is crucial to understand how interaction with such devices will happen. These new devices allow us to imagine new contexts of map usage, for instance drone-based autonomous tour guides (Figure ). In order for these scenarios to happen, many problems need to be investigated. From a Human-Computer Interaction (HCI) perspective, the first questions to study relate to suitable input and output techniques. We iteratively designed interaction techniques for Navidrone, a drone-based autonomous tour guide. This work has been done in collaboration with Prof. James Landay and Dr. Jessica Cauchard from the Stanford HCI Group.
Participants: Pierre-Antoine Cinquin and Pascal Guitton
E-learning systems, such as MOOCs or serious games, are increasingly part of training processes. Unfortunately, like most digital systems, they suffer from a lack of accessibility, in particular for people with cognitive disabilities (e.g., limited attention and memory). In this project, we develop a framework based on various disciplinary fields (education, cognitive sciences, human factors) as well as on participatory design with students with disabilities, in order to design interfaces promoting e-learning accessibility. From this framework, we have designed interaction features that have been implemented in a specific MOOC player called Aïana. Moreover, we have produced a MOOC on digital accessibility, published on the national MOOC platform (FUN) using Aïana. We are currently analyzing this first run in order to enhance Aïana by designing new interaction modalities.
Participants : Thibault Lainé, Renaud Gervais, Jérémy Frey, Hugo Germain, Fabien Lotte, Martin Hachet
Teegi is an interactive system that combines electroencephalographic (EEG) recordings and tangible interaction in order to let novices learn about how their brain works. By displaying EEG activity in real time on a support that is easy to manipulate and to comprehend, Teegi is a good tool for scientific outreach that raises public interest.
While last year we developed a semi-spherical display based on LEDs, we pushed the project further during 2016 and built a complete autonomous puppet (Figure ). The robot can move its two hands independently as well as its feet, and it can close its eyes. Besides displaying EEG activity, Teegi can react according to the brain patterns recorded in real time from the user.
We demonstrated this new prototype on various occasions over the year, during public events such as the “Fête de la Science” at La Cité des Sciences in Paris, an event that gathered thousands of visitors (see Section "Popularization").
Participants : Jérémy Frey, Martin Hachet, Fabien Lotte
3D user interfaces are increasingly used in a number of applications, spanning from entertainment to industrial design. However, 3D interaction tasks are generally more complex for users, since interacting with a 3D environment is more cognitively demanding than perceiving and interacting with a 2D one. As such, it is essential to finely evaluate user experience in order to propose seamless interfaces. To do so, a promising research direction is to measure users' inner states based on brain signals acquired during interaction, following a neuroergonomics approach. Combined with existing methods, such a tool can be used to strengthen the understanding of user experience.
In , we reviewed the related work in this area. We summarized what has been achieved and the new challenges that arise. We described how a mobile brain imaging technique such as electroencephalography (EEG) provides continuous and non-disruptive measures. EEG-based evaluation of users can give insights into multiple dimensions of the user experience, with realistic interaction tasks or novel interfaces. We investigated four constructs: workload, attention, error recognition, and visual comfort. Ultimately, these metrics could help reduce users' burden when they interact with computers.
Such advances in the understanding of users will eventually come forward thanks to the increasing dissemination of non-invasive brain imaging devices that record electrical activity on the scalp. In , we compared side by side two EEG amplifiers, the consumer-grade OpenBCI and the medical-grade g.tec g.USBamp. We suggested how an affordable and open-hardware device could facilitate, besides neuroergonomics, the emergence of various brain-computer interface applications.
Participant: Jérémy Frey
While physiological sensors are entering the mass market and reaching the general public, they are still mainly employed to monitor health. Over the course of a thesis that explored the new possibilities offered by physiological computing in terms of communication and social presence, we described several use cases involving the externalization of inner states through novel user interfaces.
For example, we created an application that uses heart rate feedback as an incentive for social interactions. A traditional board game was “augmented” through remote physiological sensing (Figure ), using webcams to detect the subtle changes in blood flow that occur with each heartbeat. Projection helped to conceal the technological aspects from users and merged the biofeedback with the physical environment. We detailed how players reacted - stressful situations could emerge when users are deprived of their own signals - and we gave directions for game designers wishing to integrate physiological sensors.
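The principle behind webcam-based heart-rate sensing (remote photoplethysmography) can be sketched as follows: average the green channel over the face region in each frame, then find the dominant frequency within the plausible heart-rate band. The synthetic trace and the naive DFT below are illustrative assumptions, not the actual system:

```python
import math

def dominant_frequency(signal, fs, f_lo=0.75, f_hi=3.0):
    """Frequency (Hz) of the strongest DFT bin in [f_lo, f_hi],
    i.e. the plausible heart-rate band (45-180 bpm)."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]      # remove the DC component
    best_f, best_p = 0.0, -1.0
    for k in range(1, n // 2):
        f = k * fs / n
        if not (f_lo <= f <= f_hi):
            continue
        re = sum(centered[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(centered[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        p = re * re + im * im
        if p > best_p:
            best_f, best_p = f, p
    return best_f

fs = 30.0  # typical webcam frame rate
# Synthetic mean green-channel trace over 10 s: slow lighting drift
# plus a small 1.2 Hz pulse component (72 bpm)
trace = [100 + 0.5 * t / fs + 0.8 * math.sin(2 * math.pi * 1.2 * t / fs)
         for t in range(int(fs * 10))]
bpm = dominant_frequency(trace, fs) * 60
print(round(bpm))
```

Restricting the search to the physiological band is what lets the pulse stand out against slow lighting drifts; real systems add face tracking and more robust filtering.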
We envisioned a second application that merges virtual reality, interactive fiction, and physiological computing in order to craft truly immersive stories; narratives that evolve depending both on the actions and on the inner states of the user/reader, stretching a medium that has shaped humanity for ages (Figure ) .
Participants : Fabien Lotte
Although promising for numerous applications, current Brain-Computer Interfaces (BCIs) still suffer from a number of limitations. In particular, they are sensitive to noise, outliers and the non-stationarity of ElectroEncephaloGraphic (EEG) signals, they require long calibration times and they are not reliable. Thus, new approaches and tools, notably at the EEG signal processing and classification level, are necessary to address these limitations. Riemannian approaches, centered on the manipulation of covariance matrices, are one such promising tool, now being adopted by a growing number of researchers. We proposed a review of how these approaches have been used for EEG-based BCI, in particular for feature representation and learning, classifier design and calibration time reduction. Finally, we also identified relevant challenges and promising research directions for EEG signal classification in BCIs, such as feature tracking on the manifold or multi-task learning .
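To make the idea concrete, one of the most common Riemannian classifiers is minimum distance to mean (MDM) over trial covariance matrices, using the affine-invariant distance. The sketch below is illustrative and simplified: it uses an arithmetic mean of covariances for the class centers, whereas the proper MDM classifier uses the Riemannian geometric mean, and real pipelines would add band-pass filtering and stronger regularization:

```python
import numpy as np
from scipy.linalg import eigh

def riemann_distance(A, B):
    # Affine-invariant Riemannian distance between SPD matrices:
    # d(A, B) = sqrt(sum_i log(lambda_i)^2), with lambda_i the
    # generalized eigenvalues of the pencil (A, B).
    w = eigh(A, B, eigvals_only=True)
    return np.sqrt(np.sum(np.log(w) ** 2))

def trial_covariance(X):
    # X: channels x samples EEG trial; lightly regularized sample covariance
    C = X @ X.T / X.shape[1]
    return C + 1e-6 * np.eye(X.shape[0])

def mdm_fit(trials, labels):
    # Class "centers" as arithmetic means of covariances (simplification;
    # true MDM uses the Riemannian geometric mean of each class).
    return {c: np.mean([trial_covariance(X)
                        for X, l in zip(trials, labels) if l == c], axis=0)
            for c in set(labels)}

def mdm_predict(centers, X):
    # Assign the trial to the class whose center is closest on the manifold
    C = trial_covariance(X)
    return min(centers, key=lambda c: riemann_distance(C, centers[c]))
```

The appeal of this family of methods is that the covariance matrix captures the spatial structure of the EEG directly, without a separate spatial-filtering step.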
Participants : Camille Jeunet, Fabien Lotte
Mental Imagery-based Brain-Computer Interfaces (MI-BCI) enable their users to send commands to a computer by imagining mental tasks (i.e., by performing MI) that are recognized in their brain signals. This type of BCI requires user training, and this training is currently poorly understood: we do not know who can learn MI-BCI control, what must be learned, or how to learn it efficiently. Moreover, we have shown that current MI-BCI training protocols are both theoretically and practically inappropriate, and that there is a lack of fundamental knowledge on BCI user training, which prevents us from designing better user training approaches .
In order to address these points, we first proposed a review and classification of cognitive and psychological predictors of MI-BCI performance. Three categories were defined: the user-technology relationship, attention and spatial abilities. The user-technology relationship refers to personality traits and states that influence users' perception of the technology and consequently impact the way they interact with it (i.e., their feeling of being in control, their self-efficacy, etc.). The attention category gathers, among others, users' attentional abilities, motivation and engagement towards the task. These elements are essential for learning, whatever the targeted skill. They are also closely related to the user-technology relationship (for instance, feeling in control will increase users' engagement towards the task, so that they will allocate more attentional resources to it). Finally, spatial abilities are the ability to produce, manipulate and transform mental images, which is closely related to the ability to control an MI-BCI. The description of these categories and of their neurophysiological correlates enabled us to propose ideas to improve MI-BCI user training. For instance, we explained how to reduce computer anxiety and increase the sense of agency, notably through the use of positively biased feedback for novice users. Also, we proposed solutions to raise and sustain attention, e.g., using neurofeedback or meditation. Finally, we argued that spatial abilities could be trained to improve users' capacity to perform mental imagery and consequently, potentially, their MI-BCI performance .
We also reviewed the literature on current training protocols (published as a book chapter in ), which suggests that these protocols are, at least theoretically, inappropriate for acquiring a skill, and thus that they could be one of the factors responsible for inefficient MI-BCI user training. In particular, participants are most of the time provided with uni-modal and evaluative feedback, whereas the literature recommends multi-modal, informative and supportive feedback. Although instructive, these insights only provide theoretical considerations about the flaws of the feedback approaches used in MI-BCI. It was therefore necessary to concretely assess whether standard MI-BCI feedback is appropriate for training a skill, and to what extent the feedback impacts BCI performance and skill acquisition. In order to experimentally evaluate the extent to which such feedback impacts users' ability to acquire a skill, we used it to teach users to perform simple motor tasks. Results (N=53 participants) revealed that with this feedback, 17% of participants did not manage to learn the skill. This suggests that current BCI feedback is most probably suboptimal for teaching a skill. A sub-group of our participants (N=20) then took part in a motor imagery-based BCI experiment. Results showed that those who struggled during the first experiment improved in performance during the second, while the others did not. We hypothesised that these results are linked to the considerable cognitive resources required to process this feedback .
It should be noted that there are many connections between BCI user training and neurofeedback training for clinical applications, both fields aiming at training their users to perform self-regulation of their brain activity. We have therefore shown how these two fields share fundamental research questions on user training, and how they can both benefit from each other .
Participants : Suzy Teillet, Camille Jeunet, Fabien Lotte
The results of one of our previous studies suggested that users' MI-BCI performance correlates with their spatial abilities , which is consistent with the literature. This result was replicated in a second study with a pure motor imagery-based BCI . We thus decided to explore the potential causal relationship between the two: would an improvement in spatial abilities lead to better MI-BCI performance? To answer this question, we designed and implemented a spatial ability (SA) training procedure (see Figure ). Then, we performed two user studies to validate the SA training procedure: results suggest that it efficiently improves participants' SA . Consequently, we included this SA training procedure in an MI-BCI protocol. Results (N=24) showed no difference in classification accuracy between participants performing 6 MI-BCI sessions and those performing 3 SA and 3 MI-BCI sessions. Nonetheless, SA training intensity impacted users' progression, and neurophysiological analyses provided us with valuable insights into brain pattern evolution throughout the training process.
Participants : Léa Pillette, Camille Jeunet, Boris Mansencal, Fabien Lotte
Mental-Imagery based Brain-Computer Interfaces (MI-BCI) are neurotechnologies enabling users to control applications using their brain activity alone. These neurotechnologies are very promising. However, existing training protocols do not enable every user to acquire the skills needed to use them. Indeed, those protocols are not consonant with recommendations from the field of psychology. In particular, current protocols do not provide social presence and emotional support to the user. Therefore, we designed and tested PEANUT, the first learning companion dedicated to the improvement of MI-BCI user training. PEANUT was designed through a combination of recommendations from the literature, the analysis of data from previous experiments, and user studies. It provides emotional support using spoken sentences, such as “C'est avec la pratique que l'on progresse” (“You improve with practice”), and facial expressions. Experiments were conducted to assess its influence on users' performance and experience. The first results indicate that PEANUT improves the user experience. Indeed, people who trained with PEANUT found it easier to learn and memorize how to use the MI-BCI system, and rated themselves as more efficient and effective than participants who had no learning companion. These results indicate that using PEANUT does benefit MI-BCI user training. Future research will keep focusing on how to provide adapted cognitive and emotional feedback to MI-BCI users through learning companions.
Participants : Jelena Mladenović, Jérémy Frey, Fabien Lotte
There are two main approaches to improving BCI systems: (i) improving the machine learning techniques and, more recently introduced, (ii) improving human learning, using knowledge from instructional design and positive psychology. Both agree that the system needs to be adapted to the user but rely on different sources of adaptation: the machine for the former and the brain for the latter. In particular, machine learning algorithms should adapt to non-stationary brain signals, while human learning approaches should adapt the system to the various users' skills and profiles. Including both aspects of adaptation would give rise to a system ready to be used in real-life conditions. However, a major obstacle lies in the large spectrum of sources of variability during BCI use, ranging from (i) imperfect recording conditions (e.g., environmental noise, humidity, static electricity, etc.) to (ii) fluctuations in the user's psychophysiological states, due to, e.g., fatigue, motivation or attention. For these reasons, BCIs have not yet proved to be reliable enough to be used outside the laboratory. In particular, it is still almost impossible to create one BCI design effective for every user, due to large inter-subject variability. Therefore, the main concerns are to create a more robust system with the same high level of success for everyone, at all times, and to improve the current usability of the system. This calls for adaptive BCI training and operation.
We propose a conceptual framework that encompasses the most important approaches and organizes them in such a way that the reader can clearly visualize which elements can be adapted and for what reason. In the interest of providing a clear review of existing adaptive BCIs, this framework considers adaptation approaches for both the user and the machine, i.e., referring to instructional design observations as well as the usual machine learning techniques. It provides not only a coherent review of the extensive literature but also enables the reader to perceive gaps and flaws in current BCI systems, which will, hopefully, lead to novel solutions and an overall improvement.
The framework (see Figure ) has a hierarchical structure, from the lowest-level elements, which undergo rapid changes, to the highest-level elements, which change at a much slower rate. It comprises: (i) one or several BCI systems/pipelines; (ii) a user model, whose elements are arranged according to different time scales ; (iii) a task model, enabling the system adaptation with respect to the user model; (iv) the conductor, an intelligent agent which implements the adaptive control of the whole system. A book chapter on this framework was submitted to a handbook on BCI.
Using BCI systems can be very frustrating because their use is not trivial and takes time to master. Unlike other learning procedures, BCIs provide little, if any, explanatory feedback to assist user learning. Also, as the feedback is not engaging, the user's mind might easily wander, which strongly affects the system's accuracy as well as the person's learning pace. For these reasons, it takes more time to train a user to understand the procedure and gain control over the system. Hence, we want to create an immersive and playful environment that attracts the user's attention and helps them learn with less effort and frustration.
We rely on the theory of Flow, introduced by Csikszentmihalyi in the 1970s. Flow is a state of enjoyment in which one is so effortlessly focused on an immersive task that one loses the perception of time. In order to fulfil these requirements, we involve users in an open-source video game called Tux Racer. Also, to ensure the maximal attention of users, the game difficulty adapts to the user's performance in real time.
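The real-time difficulty adaptation can be sketched as a simple controller that keeps the measured performance inside a target "flow channel": difficulty rises when the task becomes too easy (boredom) and falls when it becomes too hard (frustration). The target, band width and step size below are illustrative assumptions, not the values used in the actual system:

```python
def adapt_difficulty(difficulty, performance, target=0.75, band=0.1, step=0.05):
    """One adaptation step of a flow-channel controller.

    difficulty and performance are both normalized to [0, 1].
    """
    if performance > target + band:
        difficulty += step   # too easy: raise difficulty to avoid boredom
    elif performance < target - band:
        difficulty -= step   # too hard: lower difficulty to avoid frustration
    return min(1.0, max(0.0, difficulty))  # clamp to the valid range
```

In a game loop this would be called once per scoring window, feeding the updated difficulty back into, e.g., track speed or obstacle density.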
Participants : Jérémy Frey, Camille Jeunet, Fabien Lotte
Together with Maureen Clerc (Inria Sophia) and Laurent Bougrain (Inria Nancy), we co-edited the first book on Brain-Computer Interfaces in French , , which has also been translated into English , . It is published in two volumes and co-written with researchers from all over France, from many disciplines related to BCI. It covers both theoretical and practical aspects, as well as the neuroscience, mathematics, psychology, computer science, engineering and ethical aspects of BCI. It aims to be a key resource for anyone who wants to start BCI research or deepen their knowledge of the many aspects of this exciting discipline.
Interactive Collaboration in Virtual Reality for Aerospace Scenarii:
Duration: 2014-2017
PhD Thesis of Damien Clergeaud
Partners: Airbus Group
The Airbus company regularly uses virtual reality for design, manufacturing and maintenance. We work with them on collaborative interaction in order to enable efficient collaboration between operators immersed in the virtual environment from remote locations and with heterogeneous equipment. More precisely, we have developed tools to point at and manipulate objects, to remotely visualize the virtual environment, to be aware of remote manipulations, and to describe tool and component trajectories (see Section ).
HOBIT: Hybrid Optical Bench for Innovative Teaching:
Duration: 2015-2017
Funding: Idex CPU & LAPHIA, and Inria ADT
Partners: Université de Bordeaux (IUT mesures physiques) & Université de Lorraine
The goal of the Hobit project (Hybrid Optical Bench for Innovative Teaching) is to design a hybrid optical bench that benefits from both the physical and the virtual worlds to enhance teaching and training in the field of optics and photonics (See Section ).
website: https://
OpenStreetMap
Collaboration with Marina Duféal (Assistant Professor in Geography at PASSAGES, UMR 5319, Univ. Bordeaux Montaigne) and Vincent Bergeot (Num&Lib) on contributing to OpenStreetMap. We jointly organized a mapping party (“cartopartie”) for the Fête de la Science 2016 at Inria Bordeaux.
eTAC: Tangible and Augmented Interfaces for Collaborative Learning:
Funding: EFRAN
Duration: 2017-2021
Coordinator: Université de Lorraine
Local coordinator: Martin Hachet
Partners: Université de Lorraine, Inria, ESPE, Canopé, OpenEdge,
The e-TAC project proposes to investigate the potential of technologies “beyond the mouse” to promote collaborative learning in a school context. In particular, we will explore augmented reality and tangible interfaces, which support active learning and favor social interaction.
ANR Rebel:
Duration: 2016-2019
Coordinator: Fabien Lotte
Funding: ANR Jeune Chercheur Jeune Chercheuse Project
Partners: Disabilities and Nervous Systems Laboratory Bordeaux
Brain-Computer Interfaces (BCI) are communication systems that enable their users to send commands to computers through brain activity only. While BCIs are very promising for assistive technologies and human-computer interaction (HCI), they are barely used outside laboratories due to their poor reliability. Designing a BCI requires 1) its user to learn to produce distinct brain activity patterns and 2) the machine to recognize these patterns using signal processing. Most research efforts have focused on signal processing. However, BCI user training is just as essential, yet it is only scarcely studied and based on heuristics that do not satisfy human learning principles. The currently poor BCI reliability is thus probably due to suboptimal user training. We therefore propose to create a new generation of BCIs that apply human learning principles in their design to ensure that users can learn high-quality control skills, hence making BCIs reliable. This could change HCI as BCIs have promised, but failed, to do so far.
ANR Project ISAR:
Duration: 2014-2017
Coordinator: Martin Hachet
Partners: LIG-CNRS (Grenoble), Diotasoft (Paris)
Acronym: Interaction en Réalité Augmentée Spatiale / Interacting with Spatial Augmented Reality
The ISAR project (Interaction with Spatial Augmented Reality) focuses on the design, implementation and evaluation of new paradigms for improving interaction with the digital world when digital content is directly projected onto physical objects. It opens new perspectives for the exciting applications of tomorrow, beyond traditional screen-based applications.
website: https://
Inria ADT Artik:
Duration: 2014-2016
Coordinator: Jérémy Laviole & Martin Hachet
The Artik project focuses on the development of Papart (Paper Augmented Reality Toolkit). Papart is a toolkit that enables projectors, cameras (ProCam) and depth cameras to work together to create interactive surfaces. It works with consumer-grade hardware and enables tabletop interactions, although high-end cameras and projectors are also well supported. The major advances of the 2015 developments are the following: the hardware is now managed with a dedicated application, so each Papart application is hardware agnostic; extrinsic calibration of the projector and the color and depth cameras can be done with any application running, and the calibration process now takes less than 2 minutes; the touch detection can be tweaked to fit any surface, and has been tested on a table, a wall and the floor with finger, hand and foot interaction respectively. This project relies on open-source software; we also maintain the Maven distribution support for the Processing project.
website: https://
Inria ADT OpenViBE-X:
Duration: 2014-2016
Partners: Inria teams Hybrid and Athena
Coordinator: Maureen Clerc (Inria Sophia Antipolis)
This is the follow-up project of OpenViBE-NT.
website: http://
Inria Project Lab BCI-LIFT:
Duration: 2015-2018
Partners: Inria team Athena (Inria Sophia-Antipolis), Inria team Hybrid (Inria Rennes), Inria team Neurosys (Inria Nancy), LITIS (Université de Rouen), Inria team DEMAR (Inria Sophia-Antipolis), Inria team MINT (Inria Lille), DyCOG (INSERM Lyon)
Coordinator: Maureen Clerc (Inria Sophia Antipolis)
The aim is to reach the next generation of non-invasive Brain-Computer Interfaces (BCI), more specifically BCIs that are easier to appropriate, more efficient, and that suit a larger number of people. With this concern for usability as our driving objective, we will build non-invasive systems that benefit from advanced signal processing and machine learning methods and from smart interface design, and where the user immediately receives supportive feedback. What drives this project is the concern that a substantial proportion of human participants is currently categorized as “BCI-illiterate” because of their apparent inability to communicate through BCI. Through this project we aim at making it easier for people to learn to use BCIs, by implementing appropriate machine learning methods and developing user training scenarios.
website: http://
Helios:
Duration: 2015-2016
Partners: Université de Lorraine
Funding: SATT Nancy Grand Est
Coordinator: Stéphanie Fleck (Université de Lorraine)
The Helios project aims to provide a methodology and innovative media for improving the learning of basic astronomical phenomena by school groups (8-11 years). As part of this project, Potioc has focused on the development of the final application, based on augmented reality and 3D manipulation, in order to provide a high-fidelity prototype.
Program: ERC Starting Grant
Project acronym: BrainConquest
Project title: Boosting Brain-Computer Communication with High Quality User Training
Duration: 2017-2021
Coordinator: Fabien Lotte
Abstract: Brain-Computer Interfaces (BCIs) are communication systems that enable users to send commands to computers through brain signals only, by measuring and processing these signals. Making computer control possible without any physical activity, BCIs have promised to revolutionize many application areas, notably assistive technologies, e.g., for wheelchair control, and man-machine interaction. Despite this promising potential, BCIs are still barely used outside laboratories, due to their current poor reliability. For instance, BCIs only using two imagined hand movements as mental commands decode, on average, less than 80% of these commands correctly. A BCI should be considered a co-adaptive communication system: its users learn to encode commands in their brain signals (with mental imagery) that the machine learns to decode using signal processing. Most research efforts so far have been dedicated to decoding the commands. However, BCI control is a skill that users have to learn too. Unfortunately, how BCI users learn to encode the commands is essential but barely studied, i.e., fundamental knowledge about how users learn BCI control is lacking. Moreover, standard training approaches are only based on heuristics, without satisfying human learning principles. Thus, poor BCI reliability is probably largely due to highly suboptimal user training. In order to obtain a truly reliable BCI, we need to completely redefine user training approaches. To do so, I propose to study and statistically model how users learn to encode BCI commands. Then, based on human learning principles and this model, I propose to create a new generation of BCIs which ensure that users learn how to successfully encode commands with a high signal-to-noise ratio in their brain signals, hence making BCIs dramatically more reliable. Such a reliable BCI could positively change man-machine interaction, as BCIs have promised but failed to do so far.
Program: ERASMUS+
Project acronym: VISTE
Project title: Empowering spatial thinking of students with visual impairment
Duration: 2016-2019
Coordinator: National Technical University of Athens (Greece)
Other partners: Intrasoft International SA (Greece), Casa Corpolui Didatic Cluj (Romania), Liceul Special pentru Deficienti de Vedere Cluj-Napoca (Romania), Eidiko Dimotiko Sxolio Tiflon Kallitheas (Greece)
Abstract: VISTE addresses inclusion and diversity through an innovative, integrated approach for enhancing spatial thinking focusing on the unique needs of students with blindness or visual impairment. However, since spatial thinking is a critical competence for all students, the VISTE framework and associated resources and tools will focus on cultivating this competence through collaborative learning of spatial concepts and skills both for sighted and visually impaired students to foster inclusion within mainstream education. The VISTE project will introduce innovative educational practices for empowering students with blindness or visual impairment with spatial skills through specially designed educational scenarios and learning activities as well as through a spatial augmented reality prototype to support collaborative learning of spatial skills both for sighted and visually impaired students.
Prof. James Landay and Dr. Jessica Cauchard at the Stanford HCI Group (USA) on interaction with maps projected from drones
Prof. Niels Henze (University of Stuttgart, Germany) and Prof. Katrin Wolf (Hamburg University of Applied Science, Germany) on mobile applications for visually impaired people
Prof. Pierre Dillenbourg (EPFL, Switzerland) on HCI for Education
DGA-DSTL Project with UK, “Assessing and Optimising Human-Machine Symbiosis through Neural signals for Big Data Analytics”, 2014-2018
Andreas Meinel, University of Freiburg, Germany, Apr. and Dec. 2016
Katrin Wolf, University of Art and Design, Berlin, Germany, Jul. 2016
Fabien Lotte - Visiting scientist at RIKEN Brain Science Institute, Cichocki's Advanced Brain Signal Processing Laboratory, Wakoshi, Japan, October-November 2016
Camille Jeunet - University of Sussex (Brighton, UK), 01/11/2015 - 30/01/2016
Camille Jeunet - UQAM (Montréal - CA) 10/06/2016 - 10/07/2016
“2nd International OpenViBE workshop”, International BCI meeting 2016, Asilomar, CA, USA, 2016 (Fabien Lotte)
“IHM et Education”, workshop at the IHM conference, Fribourg, Switzerland, Nov. 2016 (Martin Hachet, Anke Brock)
“2nd International OpenViBE workshop”, International BCI meeting 2016, Asilomar, CA, USA, 2016 (Fabien Lotte, Camille Jeunet, Jérémy Frey)
“What’s wrong with us? Roadblocks and pitfalls in designing BCI applications”, International BCI meeting, Asilomar, CA, USA, 2016 (Fabien Lotte)
Special session “Human Factors and Performance Metrics for BMI Training and Operation”, IEEE SMC 2016, Budapest, Hungary (Fabien Lotte, Camille Jeunet)
Diversity Co-Chair at the ACM CHI’16 conference, San José, USA, 05/2016 (Anke Brock)
Microsoft Student Research Competition at the ACM ASSETS’16 conference, Reno, USA, 10/2016 (Anke Brock)
IEEE VR 2017 (Martin Hachet)
Eurographics STAR 2017 (Martin Hachet)
IHM 2016 (Martin Hachet)
Mobile and Ubiquitous Multimedia MUM 2016 (Anke Brock)
Mobile and Ubiquitous Multimedia MUM 2016 Poster Committee (David Furió, Anke Brock)
1st International Neuroadaptive Technology Conference 2017 (Fabien Lotte)
7th International Brain-Computer Interface Conference, 2017 (Fabien Lotte)
International Conference on Systems, Man and Cybernetics, Brain-Machine Interface Workshop (IEEE SMC) 2016 (Fabien Lotte, Camille Jeunet)
International workshop on Pattern Recognition in NeuroImaging (PRNI) 2016 (Fabien Lotte)
International Brain-Computer Interface Meeting 2016 (publicity committee+ review committee) (Fabien Lotte)
7th Augmented Human International Conference, 2016 (Fabien Lotte)
8th Augmented Human International Conference, 2017 (Fabien Lotte)
ACM ASSETS 2016 (Anke Brock)
Computer Applications and Quantitative Methods in Archaeology 2016 (CAA) (Pascal Guitton)
8th Augmented Human International Conference, 2017 (Fabien Lotte)
7th International Brain-Computer Interface Conference, 2017 (Camille Jeunet)
ACM SIGGRAPH 2016 (Martin Hachet)
IEEE 3DUI 2017 (Martin Hachet)
ACM ISS 2016 (Joan Sol Roo)
ACM CHI 2016 (Fabien Lotte, Anke Brock, Camille Jeunet, Jérémy Frey)
ACM CHI 2017 (Fabien Lotte, Camille Jeunet, Anke Brock, David Furió, Jérémy Frey)
Augmented Humans 2016 (Fabien Lotte)
International BCI Meeting 2016 (Fabien Lotte)
EICS 2016 (Fabien Lotte)
IJCNN 2016 (Fabien Lotte)
PRNI 2016 (Fabien Lotte)
IEEE SMC 2016 (Fabien Lotte, Camille Jeunet)
Eurohaptics 2016 (Anke Brock)
Handicap 2016 (Anke Brock)
Haptics Symposium 2016 (Anke Brock)
ACM IHM 2016 (Anke Brock)
ACM MobileHCI 2016 (Anke Brock)
ACM NordiCHI 2016 (Anke Brock)
ACM TEI 2016 (Anke Brock)
ACM Ubicomp 2016 (Anke Brock)
ACM UIST 2016 (Anke Brock)
Associate Editor in Brain Computer Interfaces (Fabien Lotte)
Associate Editor in Journal of Neural Engineering (Fabien Lotte)
Review Editor for Frontiers in Robotics and AI (Martin Hachet)
Review Editor for Frontiers in Neuroprosthetics (Fabien Lotte)
Review Editor for Frontiers in Human-Media Interaction (Fabien Lotte)
Guest Associate Editor, Frontiers in Robotics and AI, with D. Friedman, on “Brain-Computer Interfaces Technologies for Robotics and Virtual Reality”, 2016 (Fabien Lotte)
TACCESS Special Issue for ASSETS'17 conference (Anke Brock)
Computers & Graphics (Martin Hachet)
Computers and Education (David Furió)
Computational Intelligence and Neuroscience (Fabien Lotte)
Journal of Neural Engineering (Fabien Lotte)
Frontiers in Neurosciences / Frontiers in ICT (Fabien Lotte)
IEEE Transactions on Biomedical Engineering (Fabien Lotte)
IEEE Transactions on Neural Systems and Rehabilitation Engineering (Fabien Lotte)
Le Travail Humain (Fabien Lotte)
ACM TOCHI (Fabien Lotte)
Nature Scientific Reports (Fabien Lotte)
ACM TACCESS (Anke Brock)
Journal of Psychophysiology (Camille Jeunet)
PLOS One (Camille Jeunet)
Progress in Brain Research (Camille Jeunet)
IEEE Transactions on Human-Machine Systems (Camille Jeunet)
Brain Sciences (Camille Jeunet)
“Tangible Interaction and Spatial Augmented Reality for Education”, University of Sussex, Jan. 2016 (Martin Hachet).
"Vers des interfaces cerveau-ordinateur populaires", Conférence What’s Up In Your Mind, Paris, Jun 2016 (Jérémy Frey)
“Interaction Homme-Machine pour l’Education : au-delà de la souris et de l’écran”, Colloque Robotique et Education, Bordeaux, June 2016 (Martin Hachet).
"Human Learning and Alternative Applications Towards Usable Electroencephalography-based Brain-Computer Interfaces", Max Planck Institute, Tuebingen, Germany, December 2016 (Fabien Lotte)
"The birth and scope of the BrainConquest ERC starting grant project", European Research Day 2016, Tokyo, Japan, November 2016 (Fabien Lotte)
"Towards Usable EEG-based Brain-Computer Interfaces", Tokyo University of Agriculture and Technology, Tokyo, Japan, November 2016 (Fabien Lotte)
“Principles and promises of EEG-based Brain-Computer Interface technologies”, 1st Iranian IBRO/APRC School of Cognitive Neuroscience, Tehran, Iran, September 2016 (Fabien Lotte)
“When Brain-Computer Interaction meets Educational Sciences”, LaBRI general assembly, Bordeaux, France, July 2016 (Fabien Lotte)
“Toward Usable Mental Imagery-based Brain-Computer Interfaces”, Brain and Spine Institute, Paris, France, July 2016 (Fabien Lotte)
“From Neurofeedback to Brain-Computer Interfaces”, Neurofeedback workshop, Bordeaux, France, July 2016 (Fabien Lotte)
“Brain-Computer Interaction and Spatial Augmented Reality Research in Potioc team”, Concordia University, Montreal, Canada, June 2016 (Fabien Lotte, Camille Jeunet)
“Latest research results in Brain-Computer Interfaces and Augmented Reality”, Brain and Computers Digital Media Conference, Center for Digital Media, Vancouver, Canada, June 2016 (Fabien Lotte)
“Educational Science Principles for Brain-Computer Interface Design”, Inserm Lyon, France, April 2016 (Fabien Lotte)
"Considering User Training and Alternative Applications to Design Usable EEG-based BCI Technologies”, EPFL, Center for Neuroprosthetics, Geneva, Switzerland, March 2016 (Fabien Lotte)
"Traitement des signaux cérébraux et classification des états mentaux", Journée scientifique de l'IFRATH "Interfaces Cerveau-Ordinateur", Paris, France, February 2016 (Fabien Lotte)
"Reciprocal learning between machines and humans for neurofeedback and BCI", Première Journée Nationale sur le Neurofeedback, Paris, France, January 2016 (Fabien Lotte)
“Interacting with spatial information”, Stanford HCI Group, Stanford University, USA, May 2016 (Anke Brock)
“Interacting with spatial information”, HERE, Berkeley, USA, May 2016 (Anke Brock)
“Interaction avec des cartes géographiques pour tous”, Immersion, Bordeaux, France, April 2016 (Anke Brock, Julia Chatain)
“Interacting with spatial information”, University of Sussex, UK, February 2016 (Anke Brock)
Round-table moderation, Journée URFIST “Vers de nouveaux paradigmes pour l’édition scientifique”, Bordeaux, March 2016 (Pascal Guitton)
"L’éthique en Sciences du numérique", Ecole du Management Inria, Paris, September 2016 (Pascal Guitton)
"Physiological computing and spatial augmented reality: reflecting on inner state", Paris Open Source Summit, Paris, November 2016 (Jérémy Frey)
"Transparence algorithmique et éthique", Journée nouveaux arrivants Inria, Saclay, December 2016 (Pascal Guitton)
"Interfaces cerveau-ordinateur : quoi, pourquoi et comment ?", ENSCBP - Media Sciences, Bordeaux, February 2016 (Camille Jeunet)
"How Cognitive Sciences Can Contribute to Research in Brain-Computer Interaction", National Cognitive Science Conference 2016, San Diego (Camille Jeunet)
"Understanding and Improving Mental-Imagery based Brain-Computer Interface User Training: Towards Efficient, Reliable and Accessible BCIs", University of Oldenburg, October 2016 (Camille Jeunet)
"Understanding and Improving MI-BCI User-Training", University of Freiburg, Germany, November 2016 (Camille Jeunet)
IEEE 3DUI Steering Committee - Leader (Martin Hachet)
Member of Jury for recruitment of Researcher (CR2-CR1) Inria Bordeaux (Martin Hachet)
Expert for the Millennium Science Initiative research group evaluation, Chile (Fabien Lotte)
Expert for the “Sapienza” University of Rome research projects, Italy (Fabien Lotte)
Expert for the Partenariats Hubert-Curien (PHC) Germaine de Staël, France-Switzerland research projects (Fabien Lotte)
Study "Panorama du cyberespace dans 3 à 5 ans" - workshop "Evolutions technologiques", CEIS, CREC (Fabien Lotte)
Member of the Inria Cellule de veille et de prospective (foresight and monitoring unit) (Pascal Guitton)
Expert for the Crédit Impôt Recherche (Martin Hachet)
Member of the scientific committee of SCRIME (Martin Hachet)
Member of Inria Bordeaux Sustainable Development Committee (Martin Hachet)
Member of Inria Ethical Committee (COERLE) (Pascal Guitton)
Member of Inria International Chairs Committee (Pascal Guitton)
Head of the Inria RA2020 Committee (new annual Activity Report) (Pascal Guitton)
Member of the Steering Committee (Comité de Pilotage) of Software Heritage (Pascal Guitton)
Member of the University Social Responsibility Steering Committee (Comité de Pilotage Responsabilité Sociétale de l'Université), Université de Bordeaux (Pascal Guitton)
Member of the Board of Directors (Conseil d'administration), Institut d'Optique Graduate School (Pascal Guitton)
Member of the recruitment committee for the Inspecteurs Généraux de l'Education Nationale (IGEN) (Pascal Guitton)
Member of Inria Bordeaux Committee for Technological Development (Fabien Lotte)
Member of Inria Bordeaux Young Researchers Committee (Anke Brock)
Licence : Jérémy Frey, Unix and Programming, CM-TD, 74.67h eqtd, L1 Computer Science, University of Bordeaux, France
Licence : Damien Clergeaud, Algorithme et Programmation, TD-TP, 32h eqtd, L1 Computer Science, University of Bordeaux, France
Licence : Damien Clergeaud, Algorithmique des structures de données, TD-TP, 32h eqtd, L2 Computer Science, University of Bordeaux, France
Licence : Camille Jeunet, Sciences humaines et méthodes, CM-TD, 18h eqtd, Licence MIASHS, University of Bordeaux, France
Master : Jérémy Frey, Programming projects, TD, 18h eqtd, M1 Computer Science, University of Bordeaux, France
Master : Pascal Guitton, Virtual and Augmented Realities, CM, 36h eqtd, M2 Computer Science, University of Bordeaux, France
Master : Pascal Guitton, Digital accessibility, CM, 12h eqtd, M1 Cognitive Science, University of Bordeaux, France
Master : Jérémy Frey, Programming projects, TD, 10h eqtd, M2 Computer Science, University of Bordeaux, France
Master : Pascal Guitton, Assistive technologies, CM, 30h eqtd, M2 Cognitive Science, University of Bordeaux, France
Master : Anke Brock, Virtual Reality and 3D Interaction, CM-TD, 7.5h eqtd, M2 Cognitive Science, University of Bordeaux, France
Master : Martin Hachet, Virtual Reality and 3D Interaction, CM, 12h eqtd, M2 Cognitive Science, University of Bordeaux, France
Master : Fabien Lotte, Virtual Reality and 3D Interaction, CM, 4h eqtd, M2 Cognitive Science, University of Bordeaux, France
Master : Anke Brock, Interaction and Ergonomics, CM-TD, 10h eqtd, 3rd year (M2), Enseirb, Bordeaux, France
Master : Martin Hachet, Interaction and Ergonomics, CM-TD, 8h eqtd, 3rd year (M2), Enseirb, Bordeaux, France
Master: Fabien Lotte, Virtual Reality, Accessibility and Brain-Computer Interfaces, 4h eqtd, 3rd year (M2), ENSSAT, Lannion, France
Master: Fabien Lotte, Brain Computer Interfaces, 6h eqtd, 3rd year (M2), ESIEA, Laval, France
Master : Anke Brock, Human-Computer Interaction, CM-TD, 12h eqtd, M2 SRI, UPSSITECH, Toulouse, France
Master: Fabien Lotte, Human-Computer Interactions, CM-TD, 7.5h eqtd, M1 Cognitive Sciences and Ergonomy, University of Bordeaux, France
Master : Anke Brock, Accessibility of interactive systems, CM-TD, 6h eqtd, M2 IHM, ENAC and University Toulouse, France
Master : Anke Brock, Accessibility of interactive systems, CM-TD, 6h eqtd, M2 Systèmes Mobiles Autonomes Communicants / Internet des Objets (Mobiles), University of Bordeaux, France
Master : Camille Jeunet, HCI and Human Factors, CM-TD, 18h eqtd, M1 Cognitive Sciences and Ergonomy, University of Bordeaux, France
MOOC : Pascal Guitton and Hélène Sauzéon, "Comment favoriser l'accessibilité numérique" (How to foster digital accessibility), 5 weeks, France Université Numérique (FUN) platform, large audience, initial and continuous training, about 4,000 registered participants.
PhD: Camille Jeunet, “Improving User training approaches for Brain-Computer Interface”, University of Bordeaux, Defense on December 2nd, 2016 (Martin Hachet, Fabien Lotte, co-supervision with Bernard N'Kaoua, and Sriram Subramanian)
PhD in progress: Julia Chatain, "Design and evaluation of augmented geographic maps", University of Bordeaux, since September 2015 (Anke Brock and Martin Hachet)
PhD in progress: Damien Clergeaud, "Collaborative interaction for aerospace scenarios", University of Bordeaux, since November 2014 (Pascal Guitton)
PhD in progress: Joan Sol Roo, "Interaction with Spatial Augmented Reality", University of Bordeaux, since December 2014 (Martin Hachet)
PhD in progress: Jelena Mladenovic, "User Modeling for Adaptive BCI training and operation", University of Bordeaux, since December 2015 (Fabien Lotte, co-supervised with Jérémie Mattout)
PhD in progress: Pierre-Antoine Cinquin, "Design and Experimental Validation of Accessible E-learning systems for people with cognitive disabilities", University of Bordeaux, since September 2016 (Hélène Sauzéon, Pascal Guitton)
PhD in progress: Léa Pillette, "Redefining Formative Feedback in Brain-Computer Interface User Training", University of Bordeaux, since September 2016 (Fabien Lotte, Bernard N'Kaoua)
PhD in progress: Lorraine Perronnet, “Neurofeedback and Brain Rehabilitation based on EEG and fMRI”, Rennes University, since January 2014 (Fabien Lotte, co-supervision with Anatole Lécuyer, Christian Barillot, Inria Rennes and Maureen Clerc, Inria Sophia Antipolis)
PhD in progress: Stephanie Lees, “Assessing and Optimising Human-Machine Symbiosis through Neural signals for Big Data Analytics”, Ulster University, since February 2014 (Fabien Lotte, co-supervision with Damien Coyle, Paul McCullagh and Liam Maguire, Ulster University)
PhD (Rapporteur): Elizabeth Rousset, INP Grenoble, February 2016 (Pascal Guitton)
PhD (Rapporteur): Sareh Saeedi, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland, March 2016 (Fabien Lotte)
PhD (Rapporteur): Hind Gacem, Telecom ParisTech, April 2016 (Martin Hachet)
PhD (Rapporteur): Honyun Cho, Gwangju Institute of Science and Technology, South Korea, June 2016 (Fabien Lotte)
PhD (Rapporteur): Sebastien Pelurson, Université Grenoble Alpes, August 2016 (Martin Hachet)
PhD (President): Brett Ridel, Université de Bordeaux, October 2016 (Pascal Guitton)
PhD (President): Carlos Zubiaga, Université de Bordeaux, November 2016 (Pascal Guitton)
PhD (Examiner): Emeric Baldisser, Université de Bordeaux, March 2016 (Pascal Guitton)
PhD (Examiner): Guillaume Claude, INSA Rennes, July 2016 (Pascal Guitton)
PhD (Examiner): Benoit Bossavit, Universidad de Navarra, November 2016 (Martin Hachet)
PhD (Examiner): Liming Yang, Ecole Centrale de Nantes, December 2016 (Pascal Guitton)
Thesis Advisory Committee: Lonni Besançon, Université Paris Saclay, June 2016 (Martin Hachet)
Thesis Advisory Committee: Sarah Buchanan, University Central Florida, July 2016 (Martin Hachet)
Science Agora, Miraikan, Tokyo, Japan, November 2016 (Fabien Lotte)
Cartopartie (mapping party), Fête de la Science, Bordeaux, October 2016 (Anke Brock)
Demonstration of Teegi, Cité des Sciences, Paris, broadcast live on l'Esprit Sorcier, October 2016 (Jérémy Frey, Jelena Mladenovic, Thibault Lainé)
"Contrôler par la pensée" (Mind control: learn how a brain-computer interface works by playing Tux Race, and discover Teegi), Cap Sciences, October 2016 (Jelena Mladenovic, Jérémy Frey, Thibault Lainé)
“Les Interfaces Cerveau-Ordinateur” (Brain-Computer Interfaces), CogTalk, Bordeaux, October 2016 (Fabien Lotte)
"Toucher et entendre les cartes géographiques" (Touching and hearing geographic maps), TEDx UTC, Compiègne, France, January 2016, https://
"Comment le numérique nous aide à changer" (How digital technology helps us change), Science and Sustainable Development seminar, Bordeaux, June 2016 (Pascal Guitton)
"Réalité virtuelle et réalité augmentée : quelles réalités et quels futurs ?" (Virtual and augmented reality: which realities and which futures?), Photonics and Virtual Reality seminar, Bordeaux, November 2016 (Pascal Guitton)
"Le numérique et ses sciences dans le réel" (Digital technology and its sciences in the real world), national seminar « Enseigner l’option Informatique et création numérique au cycle terminal » (Teaching the computer science and digital creation option in the final cycle), ISENESR (Futuroscope), November 2016 (Pascal Guitton)
Pint of Science, "Interfaces cerveau-ordinateur : Entre mythes et réalité" (Brain-computer interfaces: between myths and reality), Bordeaux, May 2016 (Camille Jeunet)
"Mythes et réalités sur l’interaction cerveau-ordinateur" (Myths and realities about brain-computer interaction), chapter in the book "5 jeunes chercheurs d'avenir" (Le Monde thesis prize), Editions Le Pommier (Fabien Lotte)
Inner Garden, Bordeaux Geek Festival (BGF), May 2016 (Joan Sol Roo, Julia Chatain).
Augmented Michelson Interferometer, Bordeaux Geek Festival (BGF), May 2016 (Benoit Coulais, David Furio)
Augmented Michelson Interferometer, Hall of ALPC region, June 2016 (David Furio)
Demonstration of Teegi, Colloque Robotique et Education, Bordeaux, June 2016 (Jérémy Frey, Thibault Lainé)
Demonstration of Teegi, Bordeaux Geek Festival (BGF), May 2016 (Thibault Lainé)
Femmes et Sciences Deputy Board Member (« suppléante au conseil d’administration »), since 2016 (Anke Brock)
Intervention in a high school in Valence d'Agen to present our research projects and career paths, March 2016 (Anke Brock with fellow members of Femmes et Sciences Aquitaine)
"Digit’elles -témoignages de femmes scientifiques" , Fête de la Science, Bordeaux, October 2016 (Anke Brock with fellow members of Femmes et Sciences Aquitaine)
Django girls, Django workshops for young participants, April and June 2016 (Julia Chatain)
Filles et Maths, speed meeting with female high school students to speak about careers in mathematics, May 2016 (Julia Chatain)
Member of the Inria Comité Parité et Egalité (gender parity and equality committee) (Pascal Guitton)
Conference on Brain-Computer Interfaces and on how to become a research scientist, at a high school in Tulle, December 2016 (Fabien Lotte)
Radio interview on BCI for "L'oeuf ou la poule", on CHOQ, a radio station of UQAM (Université du Québec à Montréal), Montréal, Canada, June 2016 (Fabien Lotte, Camille Jeunet)
Radio interview on BCI and VR on Radio Canada, in Vancouver, Canada, June 2016 (Fabien Lotte)
Radio interview about BCI and the Brain and Computers Digital Media Conference on the Vancouver-based Round House Radio, June 2016 (Fabien Lotte)
Nuit des Chercheurs, Cap Sciences, Bordeaux, September 2016 (Camille Jeunet)