Human-Computer Interaction (HCI) is a constantly moving field. Changes in computing technologies extend their possible uses and modify the conditions of existing ones. People also adapt to new technologies and adapt them to their own needs. Different problems and opportunities thus regularly appear. In recent years, though, we believe incremental novelties have unfortunately eclipsed fundamental HCI topics on which a lot of work remains to be done. In what follows, we summarize the essential elements of our vision and the associated long-term goals.
In the early 1960s, at a time when computers were scarce, expensive, bulky machines used on formal schedules for automatic computations, Engelbart saw their potential as personal interactive resources. He saw them as tools, as things we would purposefully use to carry out particular tasks. Others at the same time had a different vision. They saw computers as partners, intelligent entities to whom we would delegate tasks. These two visions constitute the roots of today's predominant human-computer interaction paradigms, use and delegation. Our focus is on computer users and our work should ultimately benefit them. Our interest is not in solving the difficult problems related to machine understanding. It is not in what machines understand, but in what people can do with them. Instead of intelligent systems, we aim for systems supporting intelligent use and empowering people. We do not reject the delegation paradigm but clearly favor that of tool use.
Technology is most empowering when it is transparent. But the transparent tool is not the one you cannot see; it is the one that is invisible in effect, the one that does not get in your way but lets you focus on the task. Heidegger used the term zuhanden (ready-to-hand) to characterize this unobtrusive relation to things. Transparency of interaction is not best achieved with tools mimicking human capabilities, but with tools taking full advantage of them and fitted to the context and task. Our actions towards the digital world need to be digitized, and the digital world must provide us with proper feedback in return. Input and output technologies pose inevitable constraints, while the digital world calls for more and more sophisticated perception-action couplings for increasingly complex tasks. We want to study the means currently available for perception and action in the digital world. We understand the important role of the body on the human side, and the importance of hardware elements on the computer side. Our work thus follows a systems approach encompassing these elements and all the software layers above, from device drivers to applications.
Engelbart believed in the coevolution of humans and their tools. He was not just interested in designing a personal computer but also in changing people, to radically improve the way we manage complexity. The human side of this coevolutionary process has been largely ignored by the computing industry, which has focused on the development of walk-up-and-use interfaces for novice users. As a result of this focus on initial performance, we are trapped in a “beginner mode” of interaction with a low performance ceiling. People find it acceptable to spend considerable amounts of time learning and practising all sorts of skills. We want to tap into these resources to develop digital skills. We must accept that new powerful tools might not support immediate transparent use and thus require attention. Heidegger used the term vorhanden (present-at-hand) to characterize the analytic relation to things that occurs not only when learning about them, but also when handling breakdowns, when they change or need to be adapted, or when teaching others how to use them. Analytic use is unavoidable and its interplay with transparent use is essential to tool accommodation and appropriation. We want to study this interplay.
Our research program is organized around three main themes: leveraging human control skills, leveraging human perceptual skills, and leveraging human learning skills.
Our group has developed a unique and recognized expertise in transfer functions, i.e. the algorithmic transformations of raw user input for system use. Transfer functions define how user actions are taken into account by the system. They can make a task easier or impossible and thus largely condition user performance, whatever the criteria (speed, accuracy, comfort, fatigue, etc.). Ideally, the transfer function should be chosen or tuned to match the interaction context. Yet the question of how to design a function that maximizes one or more criteria in a given context remains open, and on-demand adaptation is difficult because functions are usually implemented at the lowest possible level to avoid latency problems. Latency management and transfer function design are two problems that require cross-examination to improve human performance with interactive systems. Both also contribute to the senses of initiation and control, two crucial components of the sense of agency. Our ultimate goal on these topics is to adapt the transfer function to the user and task in order to support stable and appropriate control. To achieve this, we investigate combinations of low-level (embedded) and high-level (application) ways to take user capabilities and task characteristics into account and to reduce or compensate for latency in different contexts, e.g. using a mouse or a touchpad, a touch-screen, an optical finger navigation device or a brain-computer interface.
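As an illustration of the concept, a pointing transfer function maps device displacement to cursor displacement, typically with a velocity-dependent gain so that slow movements favor precision and fast movements favor reach. The following sketch shows the general shape of such a function; the CPI/PPI conversion is standard, but the speed thresholds and gain values are illustrative, not those of any actual system:

```python
import math

def transfer(dx_counts, dy_counts, dt, cpi=1000, ppi=96):
    """Map raw device counts to cursor pixels with a velocity-dependent gain.
    dx_counts, dy_counts: raw device displacement; dt: time step in seconds."""
    # Device counts -> physical speed in m/s (1 inch = 0.0254 m)
    dist_m = math.hypot(dx_counts, dy_counts) * 0.0254 / cpi
    speed = dist_m / dt if dt > 0 else 0.0
    # Illustrative gain curve: low gain for slow (precise) movements,
    # a linear ramp, then a high constant gain for fast movements
    if speed < 0.05:
        gain = 1.0
    elif speed < 0.25:
        gain = 1.0 + 8.0 * (speed - 0.05)
    else:
        gain = 2.6
    # Apply the gain and convert counts to pixels
    scale = gain * ppi / cpi
    return dx_counts * scale, dy_counts * scale
```

With these constants, a slow 1-count step moves the cursor by about 0.1 pixel while a fast 100-count step is amplified by a factor of 2.6; the functions discussed above implement this kind of behavior at a much finer level of detail.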
Our work under this theme concerns the physicality of human-computer interaction, with a focus on haptic perception and related technologies, and the perception of animated displays.
Vibration motors have long been used to provide basic vibrotactile feedback. Other piezoceramic and electro-active polymer technologies make it possible to support programmable friction or to emboss a surface, and thin, organic technologies should soon provide transparent and conformable, flexible or stretchable substrates. We want to study the use of these different technologies for static and dynamic haptic feedback from both an engineering and an HCI perspective. We want to develop the tools and knowledge required to facilitate and inform the design of future haptic interactions taking best advantage of the different technologies.
Animations are increasingly common in graphical interfaces. Beyond their compelling nature, they are powerful tools that can be used to depict dynamic data, to help understand time-varying behaviors, to communicate a particular message or to capture attention. Yet despite their popularity, they are still poorly understood as cognitive aids. While best practices provide useful directions, very little empirical research examines the different types of animation, and their actual benefits and limitations remain to be determined. We want to increase current knowledge and develop the tools required to best take advantage of them.
By looking at ways to leverage human control and perceptual skills, the research described so far mainly aims at improving perception-action coupling to better support transparent use. This third research theme addresses the different and orthogonal topic of skill acquisition and improvement. We want to move away from the usual binary distinction between “novices” and “experts” and explore means to promote and assist digital skill development in a more progressive fashion. We are interested in means to support the analytic use of computing tools. We want to help people become aware of the particular ways they use their tools, the other ways that exist for the things they do, and the other things they might do. We want to help them increase their performance by adjusting their current ways of doing, by providing new and more efficient ways, and by facilitating transitions from one way to another. We are also interested in means to foster reflection among users and facilitate the dissemination of best practices.
Mjolnir works on fundamental aspects of Human-Computer Interaction that can be applied to diverse application domains. Our 2016 research concerned desktop and touch-based interfaces with notable applications to social network analysis, genetics research, 3D environments, as well as 3D films and Virtual Reality stories.
Mathieu Nancel joined us as an Inria researcher in November.
Marcelo Wanderley joined us in February as part of the Inria International Chair program and will spend 20% of his time with us until 2020.
Ed Lank, Associate Professor at the University of Waterloo, joined us in September for a long-term visit (10+ months) funded by Région Hauts-de-France and Université Lille 1.
In partnership with Campus France and Inria, Mitacs' Globalink Research Award program sponsored the visits of three Canadian students in our group: Nicholas Fellion (Carleton University), Hrim Mehta (Ontario Institute of Technology) and Aakar Gupta (University of Toronto).
Mjolnir presented seven papers and one “late-breaking work” at the ACM CHI 2016 conference in May, the most prestigious conference in our field.
The Animated transitions web site launched in March illustrates previous work by Fanny Chevalier and others on this topic (Histomages, Diffamation and Gliimpse).
“Honorable mention” (top 5% of the 2300+ submissions) from the ACM CHI 2016 conference to the following three papers:
“Egocentric analysis of dynamic networks with EgoLines”, from J. Zhao, M. Glueck, F. Chevalier, Y. Wu & A. Khan
“Modeling and understanding human routine behavior”, from N. Banovic, T. Buzali, F. Chevalier, J. Mankoff & A. Dey
“Direct manipulation in tactile displays”, from A. Gupta, T. Pietrzak, N. Roussel & R. Balakrishnan
“Springer award for best doctoral contribution” to Amira Chalbi-Neffati at the IHM 2016 conference.
Each software listed below is characterized according to the criteria for software self-assessment proposed by Inria's Evaluation Committee. Note that the only software mentioned here are those that were created or significantly modified during the year.
Libpointing is a software toolkit that provides direct access to HID pointing devices and supports the design and evaluation of pointing transfer functions . The toolkit provides resolution and frequency information for the available pointing and display devices and makes it easy to choose between them at run-time through the use of URIs. It allows bypassing the system's transfer functions to receive raw asynchronous events from one or more pointing devices. It replicates as faithfully as possible the transfer functions used by Microsoft Windows, Apple OS X and Xorg (the X.Org Foundation server). Running on these three platforms, it makes it possible to compare the replicated functions to the genuine ones as well as to custom ones. The toolkit is written in C++ with Python, Java and Node.js bindings available (about 49,000 lines of code in total). It is publicly available under the GPLv2 license.
The library was thoroughly improved in 2016. Notable changes include the migration of the code to GitHub, the set-up of continuous integration, and the automated release of builds for Windows, Linux and macOS. libpointing can easily be installed using the apt-get command on Linux, and the Homebrew and MacPorts package managers on macOS. New features such as the estimation of the input frequency have been added and different demos have been developed. The code has been refactored and various bugs fixed.
Web site: http://
Software characterization: [A-3] [SO-3] [SM-2] [EM-2]
Liblag is a software toolkit designed to support the comparison of latency compensation techniques. The toolkit notably includes a playground application that allows comparing different trajectory prediction algorithms on desktop (OS X, Ubuntu and Windows) and mobile (iOS and Android) systems. The source code for this toolkit (about 8,500 lines of code) is only available to TurboTouch partners for now.
Sébastien Poulmane was recruited in May as an engineer on the TurboTouch project. He has been contributing to refactoring the code, integrating new input devices and prediction techniques, and developing associated demos and experiments.
Software characterization: [A-1] [SO-4] [SM-1] [EM-2] [SDL-1]
As part of the work reported in , we implemented our mouse-based method for measuring end-to-end latency using Java/Swing, C++/GLUT, C++/Qt and JavaScript/HTML5. We also wrote Python scripts to parse the logs generated by these implementations in order to compare them. This software (about 2,500 lines of code) was made available in 2016 on a public git repository. The online interactive demo has been improved to collect anonymous latency measurement data from users and integrate libpointing in order to get information about the input and output devices connected. A native Android version has also been developed.
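End-to-end latency measurement generally boils down to pairing the timestamp of an input event with the time at which its visual effect is presented. The following minimal sketch shows this bookkeeping only; it is not the specific mouse-based method implemented above, and the `on_input`/`on_display` hooks are hypothetical names for whatever callbacks a given toolkit provides:

```python
import time

class LatencyProbe:
    """Accumulate (input time, display time) pairs and report statistics."""
    def __init__(self):
        self.samples = []
        self._pending = None

    def on_input(self):
        # Call when an input event reaches the application
        self._pending = time.perf_counter()

    def on_display(self):
        # Call when the frame reflecting that event is presented
        if self._pending is not None:
            self.samples.append(time.perf_counter() - self._pending)
            self._pending = None

    def mean_latency_ms(self):
        return 1000 * sum(self.samples) / len(self.samples)
```

In practice the hard part is instrumenting both ends of the pipeline reliably, which is precisely what the implementations above address for their respective toolkits.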
Web site: http://
Software characterization: [A-1] [SO-4] [SM-1] [EM-2] [SDL-1]
TAN stands for Transitions animées, i.e. Animated transitions. This web site illustrates some of our past research on this topic. It combines videos and live demonstrations of Histomages, an image editing tool that associates pixel and color space; Diffamation, an animation tool to follow and understand the modifications made to a document over time; and Gliimpse, a markup language editor (e.g. HTML, LaTeX, Wiki) to instantly switch from source code to the document it produces and vice versa. The source code for the three demonstrators (about 87,000 lines of Java and JavaScript) is not distributed for the moment.
Web site: http://
Software characterization: [A-4] [SO-2] [SM-3] [EM-2] [SDL-4]
InspectorWidget is an HTML5/Node.js/C++ software suite that can be used by an experimenter to track and analyze users' behaviors in closed interactive software. The suite has a recording module that records the users' display and captures low-level events while they carry out their task, and an annotation module that combines OCR and low-level input analysis so that the experimenter can annotate users' activity afterwards. InspectorWidget is cross-platform, open-source and publicly available under the GPLv3 license. New features, notably the recording and exploitation of accessibility APIs, are currently under development and will be tested and added to the software suite.
Web site: https://
Software characterization: [A-2]
The following sections summarize our main results of the year. For a complete list, see the list of publications at the end of this report.
The development of robust methods to identify which finger is causing each touch point, called "finger identification," will open up a new input space where interaction designers can associate system actions to different fingers . However, relatively little is known about the performance of specific fingers as single touch points or when used together in a “chord”. We presented empirical results for accuracy, throughput, and subjective preference gathered in five experiments with 48 participants exploring all 10 fingers and 7 two-finger chords. Based on these results, we developed design guidelines for reasonable target sizes for specific fingers and two-finger chords, and a relative ranking of the suitability of fingers and two-finger chords for common multi-touch tasks. Our work contributes new knowledge regarding specific finger and chord performance and can inform the design of future interaction techniques and interfaces utilizing finger identification .
Brain-Computer Interfaces (BCIs) are much less reliable than other input devices, with error rates ranging from 5% up to 60%. To assess the subjective frustration, motivation, and fatigue of users confronted with different levels of error rate, we conducted a BCI experiment in which the error rate was artificially controlled. Our results show that a prolonged use of BCI significantly increases the perceived fatigue and induces a drop in motivation . We also found that user frustration increases with the error rate of the system, but this increase does not seem critical for small differences of error rate. For future BCIs, we thus advise favoring user comfort over accuracy when the potential gain in accuracy remains small.
We have also investigated whether the stimulation used for training an SSVEP-based BCI has to be similar to the one ultimately used for interaction. We recorded 6-channel EEG data from 12 subjects in various conditions of distance between targets and of difference in color between targets. Our analysis revealed that the training stimulation configuration which leads to the best classification accuracy is not always the one closest to the end-use configuration . We found that the distance between targets during training has little influence if the end-use targets are close to each other, but that training at a far distance can lead to better accuracy for far-distance end use. Additionally, an interaction effect is observed between training and testing color: while training with monochrome targets leads to good performance only when the test context involves monochrome targets as well, a classifier trained on colored targets can be efficient both for colored and monochrome targets. In a nutshell, in the context of SSVEP-based BCI, training using distant targets of different colors seems to lead to the best and most robust performance in all end-use contexts.
Touch systems have a delay between user input and corresponding visual feedback, called input “latency” (or “lag”). Visual latency is more noticeable during continuous input actions like dragging, so methods to display feedback based on the most likely path for the next few input points have been described in research papers and patents. Designing these “next-point prediction” methods is challenging, and there have been no standard metrics to compare different approaches. We introduced metrics to quantify the probability of 7 spatial error “side-effects” caused by next-point prediction methods . Types of side-effects were derived using a thematic analysis of comments gathered in a study with 12 participants covering drawing, dragging, and panning tasks using 5 state-of-the-art next-point predictors. Using experiment logs of actual and predicted input points, we developed quantitative metrics that correlate positively with the frequency of perceived side-effects. These metrics enable practitioners to compare next-point predictors using only input logs.
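To fix ideas, the simplest next-point predictors extrapolate from recent samples; the sketch below uses constant-velocity extrapolation and a plain Euclidean error measure. Both are illustrative baselines, not the five predictors or the side-effect metrics studied in the paper:

```python
import math

def predict_next(points, horizon=1):
    """Constant-velocity next-point prediction.
    points: (x, y) samples at a fixed input rate; horizon: steps ahead."""
    (x0, y0), (x1, y1) = points[-2], points[-1]
    vx, vy = x1 - x0, y1 - y0  # velocity estimated from the last two samples
    return (x1 + horizon * vx, y1 + horizon * vy)

def spatial_error(predicted, actual):
    """Euclidean distance between the predicted and actual next points."""
    return math.hypot(predicted[0] - actual[0], predicted[1] - actual[1])
```

On a straight constant-speed stroke this predictor is exact, but any change of speed or direction produces a spatial error; side-effect metrics like those introduced above aggregate such errors into perceptually meaningful quantities.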
Interface designers, HCI researchers and usability experts often need to collect information regarding the usage of interactive systems and applications in order to interpret quantitative and behavioral aspects from users – such as in our study on the use of trackpads described before – or to provide user interface guidelines. Unfortunately, most existing applications are closed to such probing methods: source code or scripting support is not always available to collect and analyze users' behaviors in real-world scenarios.
InspectorWidget is an open-source cross-platform application we designed to track and analyze users' behaviors in interactive software. The key benefits of this application are: 1) it works with closed applications that provide neither source code nor scripting capabilities; 2) it covers the whole pipeline of software analysis from logging input events to visual statistics through browsing and programmable annotation; 3) it allows post-recording logging; and 4) it does not require programming skills. To achieve this, InspectorWidget combines low-level event logging (e.g. mouse and keyboard events) and high-level screen capturing and interpretation features (e.g. interface widgets detection) through computer vision techniques.
Trackpads (or touchpads) allow controlling an on-screen cursor with finger movements on their surface. Recent models also support force sensing and multi-touch interactions, which make it possible to scroll a document by moving two fingers or to switch between virtual desktops with four fingers, for example. But despite their widespread use, little is known about how users interact with them, and which gestures they are most familiar with. To better understand this, we conducted a three-step field study with Apple MacBook multi-touch trackpads.
The first step of our study consisted in collecting low-level interaction data such as contact points with the trackpad and the multi-touch gestures performed while interacting. We developed a dedicated interaction logging application that we deployed on the workstation of 11 users for a duration of 14 days, and collected a total of over 82 million contact points and almost 220,000 gestures. We then investigated finger chords (i.e., fingers used) and hand usage when interacting with a trackpad. For that purpose, we designed a dedicated mirror stand that can be easily positioned in front of the laptop's embedded web camera to divert its capturing field (Figure , left). This mirror stand is combined with a background application taking photos when a multi-finger gesture is performed. We deployed this setup on the computer of 9 users for a duration of 14 days. Finally, we deployed a system preference collection application to gather the trackpad system preferences (such as the transfer function and associated gestures) of 80 users. Our main findings are that touch contacts on the trackpad are performed on a limited sub-surface and are relatively slow (Figure , right); that the consistency of user finger chords varies depending on the frequency of a gesture and the number of fingers involved; and that users tend to rely on the default system preferences of the trackpad .
The egocentric analysis of dynamic networks focuses on discovering the temporal patterns of a subnetwork around a specific central actor, i.e. an ego-network. These types of analyses are useful in many application domains, such as social science and business intelligence, providing insights about how the central actor interacts with the outside world. EgoLines is an interactive visualization we designed to support the egocentric analysis of dynamic networks.
Cross-sectional phenotype studies are used by genetics researchers to better understand how phenotypes vary across patients with genetic diseases, both within and between cohorts. Analyses within cohorts identify patterns between phenotypes and patients (e.g. co-occurrence) and isolate special cases (e.g. potential outliers). Comparing the variation of phenotypes between two cohorts can help distinguish how different factors affect disease manifestation (e.g. causal genes, age of onset). PhenoStacks is a novel visual analytics tool we designed to support the exploration of phenotype variation within and between cross-sectional patient cohorts.
Human routines are blueprints of behavior, which allow people to accomplish purposeful repetitive tasks at many levels, ranging from the structure of their day to how they drive through an intersection. People express their routines through actions that they perform in the particular situations that triggered those actions. An ability to model routines and understand the situations in which they are likely to occur could allow technology to help people improve their bad habits, inexpert behavior, and other suboptimal routines. However, existing routine models do not capture the causal relationships between situations and actions that describe routines. Byproducts of an existing activity prediction algorithm can be used to model those causal relationships in routines . We applied this algorithm on two example datasets, and showed that the modeled routines are meaningful — that they are predictive of people’s actions and that the modeled causal relationships provide insights about the routines that match findings from previous research. Our approach offers a generalizable solution to model and reason about routines. We show that the extracted routine patterns are at least as predictive of behaviors in the two behavior logs as the baseline we establish with existing algorithms.
To make the routine behavior models created using our approach accessible to participants and allow them to investigate the extracted routine patterns, we developed a simple visualization tool. To maintain a level of familiarity, we base our visual encoding of routine behavior elements on a traditional visual representation of an MDP as a graph (Figure ). Our MDP graph contains nodes representing states (as circles) and actions (as squares), directed edges from state nodes to action nodes (indicating possible actions people can perform in those states), and directed edges from actions to states (indicating state transitions for any given state and action combination).
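A minimal sketch of this graph encoding follows, with deterministic transitions for readability (a real MDP attaches probabilities to transitions; the state and action names used here are invented examples, not data from the study):

```python
class RoutineGraph:
    """Bipartite MDP-style graph: state nodes, action nodes,
    state->action edges and (state, action)->state transition edges."""
    def __init__(self):
        self.actions_in_state = {}  # state -> set of available actions
        self.transitions = {}       # (state, action) -> next state

    def add_transition(self, state, action, next_state):
        self.actions_in_state.setdefault(state, set()).add(action)
        self.transitions[(state, action)] = next_state

    def replay(self, state, actions):
        """Follow a sequence of actions and return the resulting state."""
        for action in actions:
            state = self.transitions[(state, action)]
        return state
```

Drawing the `actions_in_state` keys as circles, the actions as squares, and the two edge sets as directed arrows yields exactly the kind of graph layout described above.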
User-authored annotations of data can support analysts in the activity of hypothesis generation and sensemaking, where it is not only critical to document key observations, but also to communicate insights between analysts. Annotation Graphs are dynamic graph visualizations that enable meta-analysis of data based on user-authored annotations.
Animations are increasingly used in interactive systems in order to enhance the usability and aesthetics of user interfaces. While animations are proven to be useful in many cases, we still find defective ones that cause many problems, such as distracting users from their main task or making data exploration slower. The fact that such animations still exist proves that animations are not yet very well understood as a cognitive aid, and that we have not yet definitely determined what makes a well-designed one. Our work on this topic aims at better understanding the different aspects of animations for user interfaces and at exploring new methods and guidelines for designing them.
From bouncing icons that catch attention, to transitions helping with orientation, to tutorials, animations can serve numerous purposes. In
We have also studied different aspects of animations for visual analysis tasks. We have worked on the design of a new model for animated transitions, explored certain aspects of visual grouping for these transitions, and studied the impact of their temporal structure on data interpretation. These works, while still in progress, have been presented at the IHM doctoral consortium .
In virtual environments, interacting directly with our hands and fingers greatly contributes to the sense of immersion, especially when force feedback is provided for simulating the touch of virtual objects. Yet, common haptic interfaces are unfit for multi-finger manipulation, and only costly and cumbersome grounded exoskeletons provide all the forces expected from object manipulation. To make multi-finger haptic interaction more accessible, we propose to combine two affordable haptic interfaces into a bimanual setup named DesktopGlove . With this approach, each hand is in charge of different components of object manipulation: one commands the global motion of a virtual hand while the other controls its fingers for grasping. In addition, each hand is subjected to forces that relate to its own degrees of freedom so that users perceive a variety of haptic effects through both of them. Our results show that (1) users are able to integrate the separated degrees of freedom of DesktopGlove to efficiently control a virtual hand in a posing task, (2) DesktopGlove shows overall better performance than a traditional data glove and is preferred by users, and (3) users considered the separated haptic feedback realistic and accurate for manipulating objects in virtual environments.
We also investigated how head movements can serve to change the viewpoint in 3D applications, especially when the viewpoint needs to be changed quickly and temporarily to disambiguate the view. We studied how to use yaw and roll head movements to perform orbital camera control, i.e., to rotate the camera around a specific point in the scene . We reported on four user studies. Study 1 evaluated the useful resolution of head movements and study 2 informed about visual and physical comfort. Study 3 compared two interaction techniques, designed by taking into account the results of the two previous studies. Results show that head roll is more efficient than head yaw for orbital camera control when interacting with a screen. Finally, Study 4 compared head roll with a standard technique relying on the mouse and the keyboard. Moreover, users were allowed to use both techniques at their convenience in a second stage. Results show that users prefer and are faster (14.5%) with the head control technique.
The resurgence of stereoscopic and Virtual Reality (VR) media has motivated filmmakers to evolve new stereo- and VR-cinematic vocabularies, as many principles for stereo 3D film and VR storytelling are unique. Concepts like plane separation, parallax position, and depth budgets in stereo, and presence, active experience, blocking and stitching in VR are missing from early planning due to the 2D nature of existing storyboards. Motivated to foresee difficulties exclusive to stereoscopy and VR, but also to exploit the unique possibilities of these media, the 3D and VR cinematography communities encourage filmmakers to start thinking in stereo/VR as early as possible. Yet, there are very few early stage tools to support the ideation and discussion of a stereoscopic film or a VR story. Traditional solutions for early visual development and design, in current practices, are either strictly 2D or require 3D modeling skills, producing content that is consumed passively by the creative team.
To fill the gap in the filmmakers' toolkit, we proposed Storeoboard , a system for stereo-cinematic conceptualization, via storyboard sketching directly in stereo (Figure ); and a novel multi-device system supporting the planning of virtual reality stories. Our tools are the first of their kind, allowing filmmakers to explore, experiment and conceptualize ideas in stereo or VR early in the film pipeline, develop new stereo- and VR-cinematic constructs and foresee potential difficulties. Our solutions are the design outcome of interviews and field work with directors, stereographers, storyboard artists and VR professionals. Our core contributions are thus: 1) a principled approach to the design and development of the first stereoscopic storyboard system that allows the director and artists to explore both the stereoscopic space and concepts in real-time, addressing key HCI challenges tied to sketching in stereoscopy; and 2) a principled survey of the state of the art in cinematic VR planning to design the first multi-device system that supports a storyboard workflow for VR film. We evaluated our tools with focus group and individual user studies with storyboard artists and industry professionals. In , we also report on feedback from the director of a live action, feature film on which Storeoboard was deployed. Results suggest that our approaches provide the speed and functionality needed for early stage planning, and the artifacts to properly discuss stereoscopic and VR films.
Tactile displays have predominantly been used for information transfer using patterns or as assistive feedback for interactions. With recent advances in hardware for conveying increasingly rich tactile information that mirrors visual information, and the increasing viability of wearables that remain in constant contact with the skin, there is a compelling argument for exploring tactile interactions as rich as visual displays. As Direct Manipulation underlies much of the advances in visual interactions, we introduced Direct Manipulation-enabled Tactile displays (DMTs) . We defined the concepts of a tactile screen, tactile pixel, tactile pointer, and tactile target, which enable tactile pointing, selection and drag & drop. We built a proof-of-concept tactile display and studied its precision limits. We further developed a performance model for DMTs based on a tactile target acquisition study, and studied user performance in a real-world DMT menu application. The results show that users are able to use the application with relative ease and speed.
We have also explored vibrotactile feedback with wearable devices such as smartwatches and activity trackers, which are becoming prevalent. These devices provide continuous information about health and fitness, and offer personalized progress monitoring, often through multimodal feedback with embedded visual, audio and vibrotactile displays. Vibrations are particularly useful for providing discreet feedback, without users having to look at a display or anyone else noticing, thus preserving the flow of the primary activity. Yet, current use of vibrations is limited to basic patterns, since representing more complex information with a single actuator is challenging. Moreover, it is unclear how much the user's current physical activity may interfere with their understanding of the vibrations. We addressed both issues through the design and evaluation of ActiVibe, a set of vibrotactile icons designed to represent progress through the values 1 to 10 . We demonstrated a recognition rate of over 96% in a laboratory setting using a commercial smartwatch. ActiVibe was also evaluated in situ with 22 participants over a 28-day period. We show that the recognition rate is 88.7% in the wild, list the factors that affect recognition, and provide design guidelines for communicating progress via vibrations.
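Since the actual ActiVibe patterns are not described here, the sketch below only illustrates the general idea of encoding a value from 1 to 10 with a single actuator; the long/short pulse scheme (long pulses count for 5, short pulses for 1) and all durations are hypothetical, not the published design.

```python
def activibe_icon(value):
    """Encode a progress value (1-10) as a pulse sequence.

    Hypothetical encoding: each long pulse counts for 5, each
    short pulse for 1 (e.g. 7 -> one long then two short pulses).
    Returns a list of (vibration_ms, pause_ms) tuples that a
    single actuator can play back in order.
    """
    if not 1 <= value <= 10:
        raise ValueError("value must be in 1..10")
    LONG, SHORT, GAP = 600, 200, 300  # durations in milliseconds
    longs, shorts = divmod(value, 5)
    return [(LONG, GAP)] * longs + [(SHORT, GAP)] * shorts
```

With such a scheme every value needs at most four pulses, which keeps the icons short enough to remain discreet.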
Autoscroll, also known as edge-scrolling, is a common interaction technique in graphical interfaces that allows users to scroll a viewport while in dragging mode: the user moves the pointer near the viewport's edge to trigger “automatic” scrolling. In spite of its wide use, existing autoscroll methods suffer from several limitations . First, most autoscroll methods rely heavily on the size of the control area: the larger it is, the faster the scrolling rate can be. The level of control therefore depends on the distance available between the viewport and the edge of the display, which can be limited, for example with small displays or when the view is maximized. Second, depending on the task, the user's intention can be ambiguous (e.g. when dragging and dropping a file, the target may be located within the initial viewport or in a different one on the same display). To reduce this ambiguity, the control area is drastically smaller for drag-and-drop operations, which also degrades scrolling rate control since the user has less input area in which to control the scrolling speed.
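The distance-based behavior described above can be sketched as follows; the control-area size, maximum rate and linear mapping are illustrative assumptions, not the implementation of any particular toolkit. The sketch makes the limitation concrete: shrinking `control_area` leaves fewer pointer positions between zero and maximum speed.

```python
def edge_scroll_rate(pointer_y, viewport_top, viewport_bottom,
                     control_area=30.0, max_rate=1000.0):
    """Classic distance-based autoscroll (simplified sketch).

    The scrolling rate (px/s) grows with how far the pointer has
    moved past the viewport edge, saturating at the end of the
    control area. Negative rates scroll up, positive rates down.
    """
    if pointer_y < viewport_top:        # pointer above the viewport
        overshoot, sign = viewport_top - pointer_y, -1.0
    elif pointer_y > viewport_bottom:   # pointer below the viewport
        overshoot, sign = pointer_y - viewport_bottom, 1.0
    else:
        return 0.0                      # inside the viewport: no autoscroll
    fraction = min(overshoot / control_area, 1.0)
    return sign * fraction * max_rate
```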
We explored how force-sensing input, now available on commercial devices such as the Apple Magic Trackpad 2 or the iPhone 6S, can be used to overcome these limitations. Force sensing is an interesting candidate because: 1) users often already apply a (relatively soft) force on the input device when using autoscroll; and 2) varying the force does not require moving the pointer, making it possible to offer rate control while using a small and consistent control area regardless of the task and the device. We designed and proposed ForceEdge, a novel interaction technique mapping the force applied on a trackpad to the autoscrolling rate . We implemented a software interface that can be used to design different transfer functions mapping force to autoscrolling rate, and to test these mappings in text selection and drag-and-drop tasks. Our pilot studies showed encouraging results; future work will focus on conducting more robust evaluations and on testing ForceEdge on mobile devices.
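A force-to-rate transfer function of the kind ForceEdge's design interface supports could look like the following sketch; the resting-force threshold, power-function shape and all parameter values are hypothetical, not the ones used in the cited work.

```python
def force_to_rate(force, f_min=0.2, f_max=1.0,
                  max_rate=1000.0, gamma=2.0):
    """Hypothetical force-to-autoscroll transfer function (sketch).

    Maps the normalized force applied on a trackpad (0..1) to an
    autoscrolling rate in px/s. Forces at or below f_min (the soft
    resting force of a finger already in dragging mode) produce no
    scrolling; a power function (gamma > 1) gives finer control at
    low rates while still allowing fast scrolling at high forces.
    """
    if force <= f_min:
        return 0.0
    normalized = min((force - f_min) / (f_max - f_min), 1.0)
    return (normalized ** gamma) * max_rate
```

Because the mapping depends only on force, the same small control area can be used for scrolling and drag-and-drop alike, which is the point the paragraph above makes.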
Gaze-based interfaces and Brain-Computer Interfaces (BCIs) allow for hands-free human-computer interaction. We investigated the combination of gaze and BCIs and proposed a novel selection technique for 2D target acquisition based on input fusion. This new approach combines the probabilistic models for each input in order to better estimate the intent of the user. We evaluated its performance against existing gaze and brain-computer interaction techniques. Twelve participants took part in our study, in which they had to search for and select 2D targets with each of the evaluated techniques. Our fusion-based hybrid interaction technique was found to be more reliable than the previous gaze-and-BCI hybrid techniques for 10 of the 12 participants, while being 29% faster on average. However, as has been observed in hybrid gaze-and-speech interaction, the gaze-only technique still provides the best performance. Our results should encourage the use of input fusion, as opposed to sequential interaction, to design better hybrid interfaces .
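Assuming the two inputs are conditionally independent given the intended target, probabilistic fusion can be sketched as a normalized element-wise product of the per-target distributions; the function and the numbers below are illustrative, not the actual models used in the study.

```python
def fuse(p_gaze, p_bci):
    """Fuse two per-target probability distributions (sketch).

    Under a naive conditional-independence assumption, the fused
    posterior over targets is proportional to the element-wise
    product of the two input distributions.
    """
    product = [g * b for g, b in zip(p_gaze, p_bci)]
    total = sum(product)
    if total == 0.0:
        # Degenerate case: the inputs disagree completely; fall
        # back to a uniform distribution over the targets.
        return [1.0 / len(product)] * len(product)
    return [p / total for p in product]

# Gaze alone is ambiguous between targets 0 and 1; the (weaker)
# BCI evidence disambiguates in favor of target 1.
posterior = fuse([0.45, 0.45, 0.10], [0.20, 0.60, 0.20])
```

The selected target is then the argmax of the fused posterior, so each input can resolve ambiguities left by the other instead of being used sequentially.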
The desktop workstation remains the most common setup for office work tasks such as text editing, CAD, data analysis or programming. While several studies have investigated how users interact with their devices (e.g. pressing keyboard keys, moving the cursor, etc.), it is not clear how they arrange their devices on the desk and whether existing user behaviors can be leveraged.
We designed the LivingDesktop , an augmented desktop with devices capable of moving autonomously. The LivingDesktop can control the position and orientation of the mouse, keyboard and monitors, offering different degrees of control for both the system (autonomous, semi-autonomous) and the user (manual, semi-manual) as well as different perceptive qualities (visual, haptic) thanks to a large range of device motions. We implemented a proof of concept of the LivingDesktop combining rails, robotic bases and magnetism to control the position and orientation of the devices. This new setup presents several interesting features: (1) it improves ergonomics by continuously adjusting the position of its devices, helping users adopt ergonomic postures and avoid holding static postures for extended periods; (2) it facilitates collaborative work between local (e.g. located in the office) and remote co-workers; (3) it leverages context by reacting to the position of the user in the office, the presence of physical objects (e.g. tablets, food) or the user's current activity in order to maintain a high level of comfort; (4) it reinforces physicality within the desktop workstation to increase immersion.
We conducted a scenario-based evaluation of the LivingDesktop. Our results showed the perceived usefulness of the collaborative and ergonomic applications, and how the system inspired our participants to elaborate novel application scenarios, including social communication and accessibility.
Human-computer interactions are greatly affected by the latency between human input and the system's visual response, and the compensation of this latency is an important problem for the HCI community. We developed a simple forecasting algorithm for latency compensation in indirect interaction with a mouse, based on numerical differentiation. Several differentiators were compared, including a novel algebraic version, and an optimized procedure was developed for tuning the parameters of the algorithm. Its efficiency was demonstrated on real data, measured with a 1 ms sampling time. These results are developed in , and a patent has been filed on a subsequent latency compensation technique .
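A minimal version of forecasting by numerical differentiation can be sketched as follows; the cited work compares several more elaborate (e.g. algebraic) differentiators, so this second-order Taylor extrapolation with backward finite differences is only illustrative.

```python
def forecast(positions, dt, latency):
    """Forecast the pointer position `latency` seconds ahead (sketch).

    Estimates velocity and acceleration from the last three samples
    with backward finite differences, then extrapolates the current
    position with a second-order Taylor expansion. Applied to each
    axis independently, this lets the system draw the cursor where
    the pointer is expected to be once the frame reaches the screen.
    """
    x0, x1, x2 = positions[-3], positions[-2], positions[-1]
    v = (x2 - x1) / dt                   # first backward difference
    a = (x2 - 2 * x1 + x0) / (dt * dt)   # second backward difference
    return x2 + v * latency + 0.5 * a * latency * latency
```

With a 1 ms sampling time, `dt` would be 0.001 and `latency` the measured end-to-end delay; noisy input makes the choice of differentiator and its tuning the hard part of the problem.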
Mock-up of a tool for dynamic media pre-production: we are currently working with the HCOP holding company on the design of new tools for the pre-production of dynamic media such as videos, e-learning animations, etc. This work involves interviews with professional video producers, the identification of opportunities for tools that could help them, and the production of descriptions and mock-ups of these tools.
Recognition and interpretation of piano fingering: we have started a new collaboration with Hugues Leclère, concert pianist and professor at the “Conservatoire à rayonnement régional de Paris”. Our objective is to investigate new sensing technology and interpretation algorithms for accurate live recognition of piano fingerings. Ultimately, this technology would ease the transcription of fingerings directly onto scores during play and support both the learning and training of piano fingerings, given appropriate visualization and interaction techniques that we will investigate in a second phase of this collaboration.
The goal of this project is the design and implementation of novel cross-device systems and interaction techniques that minimize the cost of divided attention. Of particular interest are notification systems on smartwatches and in distributed computing systems. More precisely, we are designing a cross-device activity and notification monitor that intercepts external (e.g. a new e-mail) and internal (e.g. a video editing software completing an export) notifications and distributes them to the device the user is currently wearing or interacting with, in order to minimize notification redundancy.
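The intended behavior can be sketched as a simple router that delivers each notification only to the most recently active device; the class and method names below are hypothetical, chosen for illustration only.

```python
import itertools

class NotificationRouter:
    """Sketch of a cross-device notification monitor (hypothetical API).

    Intercepts notifications and delivers each one only to the device
    the user interacted with most recently, so the same alert is not
    duplicated on every device the user owns.
    """
    def __init__(self):
        self._clock = itertools.count()  # deterministic logical clock
        self._last_activity = {}         # device name -> logical time

    def report_activity(self, device):
        """Devices call this whenever the user interacts with them."""
        self._last_activity[device] = next(self._clock)

    def dispatch(self, notification):
        """Return the single device this notification should go to."""
        if not self._last_activity:
            return None  # no known active device: queue or broadcast
        return max(self._last_activity, key=self._last_activity.get)
```

A real monitor would also weigh whether a device is worn or merely nearby, which is precisely the kind of policy the project investigates.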
Partner: University College London Interaction Centre (United Kingdom).
Touch-based interactions with computing systems are greatly affected by two interrelated factors: the transfer functions applied to finger movements, and latency. This project aims at transforming the design of touch transfer functions from black art to science, to support high-performance interactions. We are working on the precise characterization of the functions used and of the latency observed in current touch systems. We are developing a testbed environment to support multidisciplinary research on touch transfer functions, and will use this testbed to design latency reduction and compensation techniques as well as new transfer functions.
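As background, a transfer function of this kind maps input speed to a control-display gain that scales the finger's displacement; the piecewise-linear shape and all parameter values in the sketch below are made up for illustration, not measured from any real system.

```python
def cd_gain(speed, base_gain=1.0, max_gain=4.0,
            v_low=0.02, v_high=0.4):
    """Sketch of a speed-dependent control-display gain function.

    Maps input device speed (m/s) to a gain; the on-screen
    displacement is the input displacement times this gain. Slow,
    precise movements get a low gain, fast movements a high one,
    with a linear ramp in between (hypothetical parameters).
    """
    if speed <= v_low:
        return base_gain
    if speed >= v_high:
        return max_gain
    t = (speed - v_low) / (v_high - v_low)
    return base_gain + t * (max_gain - base_gain)
```

Characterizing real systems means recovering curves like this one, along with the latency, from raw input and display measurements.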
Partners: Inria Lille's NON-A team and the “Perceptual-motor behavior group” from the Institute of Movement Sciences.
This project studies the fine motor control of patients with Parkinson's disease in an ecological environment: at home, without experimenters present. Through longitudinal studies, we collect raw information from pointing devices to build a large database of pointing behavior data. From the analysis of this large dataset, the project aims at inferring each individual's disease progression and the influence of treatments.
Partners: the “Perceptual-motor behavior group” from the Institute of Movement Sciences and Hôpital de la Timone.
Web site: http://
The goal of this large-scale initiative is to design a new generation of non-invasive Brain-Computer Interfaces (BCI) that are easier to appropriate, more efficient, and suited for a larger number of people.
Partners: Inria's ATHENA, NEUROSYS, POTIOC, HYBRID & DEMAR teams, Centre de Recherche en Neurosciences de Lyon (INSERM) and INSA Rouen.
Web site: https://
The main objective of this project is to develop and evaluate new types of haptic actuators printed on advanced Thin, Organic and Large Area Electronics (TOLAE) technologies for use in car dashboards. These actuators are embedded in plastic molded dashboard parts. The expected outcome is a marketable solution for haptic feedback on curved interactive surfaces.
Partners: CEA (coordinator), Inria Rennes' HYBRID team, Arkema, Bosch, Glasgow University, ISD, Walter Pack, Fundacion Gaiker.
Web site: http://
The goal of the project is the design and implementation of a musical interaction design workbench to facilitate the exploration and definition of new interactive technologies for both musical creation and performance.
Partner: Inria Saclay's EXSITU team and the Input Devices and Music Interaction Laboratory (IDMIL) from the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT) at McGill University, Canada.
Web site: http://
Visiting scholars:
Marcelo Wanderley, Professor at McGill University, Canada (3 one week visits in April, October & December)
Edward Lank, Associate Professor at the University of Waterloo, Canada (since September)
Daniel Wigdor, Associate Professor at the University of Toronto, Canada (April 2016)
Baptiste Caramiaux, Post-Doctoral researcher at McGill University, Canada, & IRCAM (December)
Internships:
Filipe Calegario, PhD student at McGill University, Canada (January)
Nicholas Fellion, Master's student at Carleton University, Canada (from January to April)
Aakar Gupta, PhD student at the University of Toronto, Canada (from June to September)
Hrim Mehta, PhD student at the Ontario Institute of Technology, Canada (from May to August)
Anastasia Kuzminykh, PhD student at the University of Waterloo, Canada (from October to December)
VISAP (IEEE): Fanny Chevalier (general co-chair)
Rencontre Inria Industrie “Interactions avec les objets et services numériques”: Nicolas Roussel (scientific chair)
IHM: Fanny Chevalier (Work-in-Progress co-chair), Stéphane Huot (AFIHM's 20th anniversary co-chair)
CHI (ACM): Géry Casiez (associate chair), Nicolas Roussel (associate chair)
UIST (ACM): Fanny Chevalier (associate chair), Géry Casiez (associate chair)
Infovis (IEEE): Fanny Chevalier (PC member)
MobileHCI (ACM): Sylvain Malacria (associate chair)
IHM: Fanny Chevalier (PC member)
SIGGRAPH (ACM): Fanny Chevalier
UIST (ACM): Sylvain Malacria, Thomas Pietrzak
IHM: Sylvain Malacria
GI (ACM): Sylvain Malacria
TEI (ACM): Thomas Pietrzak
MobileHCI (ACM): Thomas Pietrzak
Haptic Symposium (IEEE): Thomas Pietrzak
Transactions on Computer Human Interaction (ACM): Sylvain Malacria, Géry Casiez, Thomas Pietrzak
Transactions on Visualization and Computer Graphics (IEEE): Fanny Chevalier
Information Design Journal (John Benjamins Publishing Company): Fanny Chevalier
Interacting with Computers (Oxford Journals): Thomas Pietrzak
Direct spacetime sketching and editing of visual media, Microsoft Research, Redmond: Fanny Chevalier
Opportunities and limits of Adaptive User Interfaces, Technology and Routines workshop, Bavarian Academy of Sciences and Humanities, Munich, Germany: Sylvain Malacria
Towards the full experience of playing drums on a virtual drumkit, Symposium on Haptic and Music, McGill University, Montréal, Canada: Thomas Pietrzak
PICOM competitiveness cluster: Nicolas Roussel (scientific board member)
Agence Nationale de la Recherche: Fanny Chevalier (committee member for the JCJC, PRCE and PRC programs)
European Research Council: Géry Casiez (reviewer for the Starting Grant program)
For Inria:
Evaluation committee: Nicolas Roussel (member)
Gender equity and equality committee: Nicolas Roussel (member)
Strategic orientation committee for the information system (COSS scientifique): Nicolas Roussel (member)
International relations working group (COST-GTRI): Stéphane Huot (member)
For Inria Lille – Nord Europe:
Scientific officer (délégué scientifique): Nicolas Roussel
Research jobs committee (CER): Fanny Chevalier (member), Nicolas Roussel (member)
Technical development committee (CDT): Nicolas Roussel (member)
Consultative committee (Comité de centre): Fanny Chevalier (member)
Operational legal and ethical risk assessment committee (COERLE): Stéphane Huot (local correspondent)
Support to researchers (accompagnement des chercheurs): Stéphane Huot (adviser)
Activity reports (RAweb): Nicolas Roussel (local correspondent)
For the CRIStAL lab of Univ. Lille 1:
Laboratory council: Géry Casiez (elected member)
Association Francophone d'Interaction Homme-Machine (AFIHM): Géry Casiez (board member, president), Stéphane Huot (board member and scientific council member), Thomas Pietrzak (board member)
Inria’s eligibility jury for senior researcher positions (DR2): Nicolas Roussel
Inria’s eligibility jury for junior researcher positions (CR2) in Saclay: Nicolas Roussel
ENAC's hiring committee for an assistant professor position: Nicolas Roussel
Polytech Nantes' hiring committee for a Computer Science Assistant Professor position: Fanny Chevalier (member)
Master Informatique Image Vision Interaction (IVI): Géry Casiez (16 hrs), Fanny Chevalier (16 hrs), Thomas Pietrzak (16 hrs), NIHM : Nouvelles Interactions Homme-Machine, M2, Univ. Lille 1
Master Informatique: Thomas Pietrzak (50 hrs), Sylvain Malacria (30 hrs), Nicolas Roussel (8 hrs), IHM : Interactions Homme-Machine, M1, Univ. Lille 1
Master Informatique: Géry Casiez, Multi-Touch Interaction, 10 hrs, M1, Univ. Lille 1
Master: Géry Casiez (6 hrs), Thomas Pietrzak (10.5 hrs), 3DETech : 3D Digital Entertainment Technologies, M2 level, Télécom Lille
Master Sciences Humaines et Sociales Réseaux Sociaux Numériques (RSN): Fanny Chevalier, Statistiques et visualisation de données, 8 hrs, M2, Univ. Lille 1
Master Sciences Humaines et Sociales Réseaux Sociaux Numériques (RSN): Fanny Chevalier, Outils et pratiques numériques, 6 hrs, M1, Univ. Lille 1
Master Computing for Medicine: Fanny Chevalier, Information Visualization in Medicine, 1hr, Masters, Univ. of Toronto
Graduate course in Computer Sciences: Fanny Chevalier, Topics in Interactive Computing: Information Visualization, 16hrs, MSc and PhD, Univ. of Toronto
Licence Informatique: Thomas Pietrzak (45 hrs), Logique, L3, Univ. Lille 1
Licence Informatique: Thomas Pietrzak (36 hrs), Automates et Langages, L3, Univ. Lille 1
Licence Sciences pour l'Ingénieur (SPI): Sylvain Malacria, Introduction à l'Interaction Homme Machine, 26 hrs, L3, Institut Villebon Georges Charpak
Research Masters in Computer Science & HCID Masters (EIT ICT Labs European Master in Human-Computer Interaction and Design): Stéphane Huot, Advanced Programming of Interactive Systems, 13.5 hrs, M1 & M2, Univ. Paris Saclay
DUT Informatique: Géry Casiez, IHM, 72 hrs, 1st year, IUT A de Lille - Univ. Lille 1
DUT Informatique: Géry Casiez, Algorithmique, 80 hrs, 1st year, IUT A de Lille - Univ. Lille 1
DUT Informatique: Géry Casiez, Modélisation mathématique, 14 hrs, 2nd year, IUT A de Lille - Univ. Lille 1
DUT Informatique: Géry Casiez, Projets, 18 hrs, 2nd year, IUT A de Lille - Univ. Lille 1
PhD in progress: Hakim Si Mohammed, Improving Interaction based on a Brain-Computer Interface, started October 2016, advised by Anatole Lecuyer, Géry Casiez & Nicolas Roussel (in Rennes)
PhD in progress: Nicole Ke Chen Pong, Comprendre et améliorer le vocabulaire interactionel des utilisateurs, started October 2016, advised by Nicolas Roussel & Sylvain Malacria
PhD in progress: Thibault Raffaillac, Languages and system infrastructure for interaction, started October 2015, advised by Stéphane Huot & Stéphane Ducasse
PhD in progress: Amira Chalbi-Neffati, Comprendre et mieux concevoir les animations dans le contexte des interfaces graphiques, started October 2014, advised by Nicolas Roussel & Fanny Chevalier
PhD in progress: Jeronimo Barbosa, Design and Evaluation of Digital Musical Instruments, McGill University, started in 2013, advised by Marcelo Wanderley & Stéphane Huot (since 2016)
PhD in progress: Alexandre Kouyoumdjian, Multimodal selection of numerous moving targets in large visualization platforms: application to interactive molecular simulation, started October 2013, advised by Stéphane Huot, Patrick Bourdot & Nicolas Ferey (in Saclay)
PhD in progress: Justin Mathew, New visualization and interaction techniques for 3D spatial audio, started June 2013, advised by Stéphane Huot & Brian Katz (in Saclay)
PhD: Oleksandr Zinenko, Interactive code restructuring, Univ. Paris Saclay, defended in November 2016, advised by Stéphane Huot & Cédric Bastoul (Université de Strasbourg)
PhD: Alix Goguey, Understanding and designing touch interaction using finger identification, Univ. Lille 1, defended in October 2016 , advised by Géry Casiez.
PhD: Andéol Evain, Optimizing the use of SSVEP-based brain-computer interfaces for human-computer interaction, defended in December 2016, advised by Anatole Lecuyer, Géry Casiez & Nicolas Roussel (in Rennes)
Michael Glueck (PhD, University of Toronto): Fanny Chevalier
Nicole Sultanum (PhD, University of Toronto): Fanny Chevalier
Ignacio Avellino (PhD, Université Paris Saclay): Nicolas Roussel
Morten Esbensen (PhD, IT University of Copenhagen): Nicolas Roussel, reviewer
Élisabeth Rousset (PhD, Université de Grenoble): Géry Casiez, reviewer
Sébastien Pelurson (PhD, Université de Grenoble): Géry Casiez, reviewer
Waseem Safi (PhD, Université de Caen Normandie): Thomas Pietrzak, examiner
“Vers une meilleure appréciation des algorithmes qui nous entourent”, talk for Rencontres Inria Industrie, Plaine Image, Tourcoing, November: Fanny Chevalier
Prospective talk on HCI as part of BeyondLab's “Aux portes du futur” evening, November: Nicolas Roussel
“Chorégraphie des Transitions animées”, talk for CRIStAL journée Recherche Innovation Création, Lille, October: Amira Chalbi-Neffati
“Les objets deviennent intelligents, et nous ?”, talk at Lycée Diderot (Carvin) as part of Inria Lille's “Chercheurs Itinérants” program, October: Nicolas Roussel
“Efficacité et performance des transitions animées”, talk at EuraTechnologies (plateau Inria), March: Fanny Chevalier
Presentation of the Mjolnir team, talk at EuraTechnologies (plateau Inria), January: Nicolas Roussel