

Section: Research Program

Objective 2: Creating interactive systems

Our objective here is to create interactive systems and design interaction techniques dedicated to the completion of interaction tasks. We divide our work into three main categories:

  • Interaction techniques based on existing Input/Output (IO) devices.

  • New IO and related techniques.

  • BCI and physiological computing.

Interaction techniques based on existing Input/Output (IO) devices

When using desktop IO (i.e., mouse, keyboard, and monitor), a major challenge is to design interaction techniques that allow users to complete 3D interaction tasks. Indeed, the desktop IO space is mainly dedicated to the completion of 2D interaction tasks and is not well suited to 3D content; consequently, 3D user interfaces need to be designed with great care. We have proposed a state of the art that describes the major approaches and techniques in this area. In the past few years, we have been particularly interested in the problem of interaction when the 3D content is displayed on a touchscreen. Indeed, standard (2D) HCI has evolved from mouse to touch input, and numerous research projects have been conducted on this transition. In contrast, very little work has addressed the 3D case. We have contributed to moving desktop 3D UIs from the mouse to the touch paradigm: what we used to do with a mouse in front of a screen no longer works well on touch devices. To address this problem, we have focused on touch-based 3D UIs. A first contribution was tBox, a new 3D transformation widget designed for manipulating 3D objects on touch screens. In a second contribution, carried out with our industrial partners Vectuel and Mappy/PagesJaunes, we explored several strategies for navigating 3D digital cities from touch input.
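As an illustration of the kind of mapping involved, the short Python sketch below converts a one-finger touch drag into a 3D rotation. This is a generic, deliberately simplistic mapping given only for concreteness; it is not the tBox design, and every name and parameter in it is an assumption.

    import numpy as np

    def drag_to_rotation(dx, dy, sensitivity=0.01):
        """Map a 2D touch drag (in pixels) to a 3D rotation matrix.

        A horizontal drag rotates the object around the screen's vertical axis,
        a vertical drag around its horizontal axis. Illustrative only, not tBox.
        """
        ax, ay = dy * sensitivity, dx * sensitivity  # rotation angles in radians
        rx = np.array([[1, 0, 0],
                       [0, np.cos(ax), -np.sin(ax)],
                       [0, np.sin(ax),  np.cos(ax)]])
        ry = np.array([[np.cos(ay), 0, np.sin(ay)],
                       [0, 1, 0],
                       [-np.sin(ay), 0, np.cos(ay)]])
        return ry @ rx

    # Example: a 40-pixel rightward, 10-pixel upward drag applied to one vertex
    R = drag_to_rotation(dx=40, dy=-10)
    rotated_vertex = R @ np.array([1.0, 0.0, 0.0])

Such simple mappings quickly become ambiguous when translation, rotation, and scaling must all be reachable from the same touch surface, which is precisely what motivates dedicated widgets such as tBox.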

New IO and related techniques

In Potioc, we are interested in exploring new IO modalities that may make interaction easier, more engaging, and more motivating. In the past few years, we have designed new interactive systems that exploit unconventional IO modalities. Stereoscopic visualization has great potential for the understanding of 3D content; on the other hand, interaction with such stereoscopic environments is generally difficult. To address this problem, we designed Toucheo, a new system that combines stereoscopic visualization and touch input. We have also contributed to the design of a system that combines 3D spatial and touch input in a stereoscopic 3D environment.

In the scope of immersive VR, we have also proposed extensions of the current IO space. In particular, we presented a new input device specifically designed for playing music in an immersive VR environment, which mixes graphical and percussion-based interaction. Another example is the SIMCA project, in which we built a gateway simulator composed of numerous screens, video projectors, and tracking systems.

Tangible interaction has also been a subject of interest for us. Indeed, we believe that directly manipulating physical objects to interact with the digital world has great potential, in particular when the general public is targeted. In this direction, we have notably proposed PapARt, a system that mixes physical drawing and augmented reality. With this system, the computer disappears, and the user interacts with the digital content as he or she would with physical content. Another example is Rouages, where musicians play physical MIDI instruments that are augmented with virtual information to provide rich experiences to the audience. Our most recent contribution is Teegi, a new system based on a unique combination of spatial augmented reality, tangible interaction, and real-time neurotechnologies. With Teegi, users can visualize and analyze their own brain activity in real time, on a tangible character that can be easily manipulated and with which it is possible to interact.

BCI and physiological computing

As part of our research on the design of interactive systems based on physiological signals, and in particular on brain signals (for BCI design), we conducted a number of research projects on EEG signal processing and classification. Indeed, in order to design practical BCIs that can be used outside the lab, there is a need for robust EEG signal processing algorithms, with the long-term objective of correctly recognizing the users' mental commands (and thus EEG patterns) anytime and anywhere.

To do so, we first explored and designed new features to represent EEG signals. We notably explored multifractal cumulants and predictive complexity features, waveform length features combined with an optimal spatial filter that we designed, as well as phase-locking value features (i.e., functional connectivity between brain areas), also combined with an optimal spatial filter we designed. All these features proved useful for classifying EEG signals and, more importantly, increased BCI classification performance (by 2 to 4% on average) when combined with the gold-standard features, namely band power features.

To make BCIs more robust to noise and non-stationarities, we proposed to integrate a priori knowledge into machine learning algorithms. Such knowledge represents any information we have about what a good filter or classifier should be, for instance. We successfully demonstrated this approach to learn robust and stable spatial filters.

Finally, we worked on reducing the long and tedious BCI calibration times by making it possible to design a BCI from very few training EEG signals. To do so, we proposed to generate artificial EEG signals from the few EEG trials initially available, in order to augment the training set size in a relevant way. This enabled us to calibrate BCI systems with 2 to 3 times less data than standard designs while maintaining similar classification performance, hence effectively reducing the calibration time by a factor of 2 to 3.
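For concreteness, the sketch below shows how the gold-standard band power features mentioned above are typically computed: band-pass filter each EEG channel in a frequency band of interest, then take the log-variance of the filtered signal. The sampling rate, band limits, and array shapes are assumptions made for the example.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def band_power_features(trial, fs=250.0, band=(8.0, 12.0), order=4):
        """Log band power of each EEG channel for one trial.

        trial : array of shape (n_channels, n_samples)
        Returns one feature per channel: the log-variance of the band-pass
        filtered signal, a standard band power estimate.
        """
        nyq = fs / 2.0
        b, a = butter(order, [band[0] / nyq, band[1] / nyq], btype="bandpass")
        filtered = filtfilt(b, a, trial, axis=1)   # zero-phase band-pass filtering
        return np.log(np.var(filtered, axis=1))    # log-variance per channel

    # Example with random data standing in for a 4 s, 32-channel EEG trial at 250 Hz
    rng = np.random.default_rng(0)
    features = band_power_features(rng.standard_normal((32, 1000)))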
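The idea of injecting a priori knowledge into the learning of spatial filters can be illustrated with a simple form of regularized CSP, in which the class covariance matrices are shrunk toward the identity before solving the usual generalized eigenvalue problem. This is only one possible prior, given as a sketch; the regularization terms we actually studied may differ.

    import numpy as np
    from scipy.linalg import eigh

    def regularized_csp(trials_a, trials_b, shrinkage=0.1, n_filters=3):
        """CSP-like spatial filters with a simple shrinkage prior.

        trials_a, trials_b : arrays of shape (n_trials, n_channels, n_samples),
        one per class. Returns a (n_channels, 2 * n_filters) filter matrix.
        """
        def reg_cov(trials):
            c = np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)
            # Prior knowledge enters here: shrink the covariance toward the identity
            return (1 - shrinkage) * c + shrinkage * np.eye(c.shape[0])

        ca, cb = reg_cov(trials_a), reg_cov(trials_b)
        # Generalized eigenvalue problem: filters maximizing the variance of one
        # class relative to the total variance of both classes
        vals, vecs = eigh(ca, ca + cb)
        order = np.argsort(vals)
        return np.hstack([vecs[:, order[:n_filters]], vecs[:, order[-n_filters:]]])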
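Finally, the sketch below illustrates one simple way of generating artificial EEG trials from a few real ones: cut the real trials of a given class into temporal segments and recombine segments drawn from different trials. It is meant only to convey the idea of augmenting the training set; the exact generation schemes we proposed differ in their details.

    import numpy as np

    def generate_artificial_trials(trials, n_new, n_segments=4, rng=None):
        """Generate artificial trials by recombining segments of real trials.

        trials : array of shape (n_trials, n_channels, n_samples), all from one class.
        Returns an array of shape (n_new, n_channels, n_samples).
        """
        if rng is None:
            rng = np.random.default_rng()
        n_trials, n_channels, n_samples = trials.shape
        bounds = np.linspace(0, n_samples, n_segments + 1, dtype=int)
        new_trials = np.empty((n_new, n_channels, n_samples))
        for i in range(n_new):
            for s in range(n_segments):
                src = rng.integers(n_trials)  # pick each segment from a random real trial
                new_trials[i, :, bounds[s]:bounds[s + 1]] = \
                    trials[src, :, bounds[s]:bounds[s + 1]]
        return new_trials

    # Example: turn 10 real trials into a training set of 40 (10 real + 30 artificial)
    real = np.random.default_rng(1).standard_normal((10, 32, 1000))
    augmented = np.concatenate([real, generate_artificial_trials(real, n_new=30)], axis=0)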