Activity report 2022
Project-Team
LOKI
RNSR: 201822657D
In partnership with:
Université de Lille
Team name:
Technology & Knowledge for Interaction
In collaboration with:
Centre de Recherche en Informatique, Signal et Automatique de Lille
Domain
Perception, Cognition and Interaction
Theme
Interaction and visualization
Creation of the Project-Team: July 1, 2019

Keywords

Computer Science and Digital Science

  • A2.1.3. Object-oriented programming
  • A2.1.12. Dynamic languages
  • A5.1.1. Engineering of interactive systems
  • A5.1.2. Evaluation of interactive systems
  • A5.1.3. Haptic interfaces
  • A5.1.5. Body-based interfaces
  • A5.1.6. Tangible interfaces
  • A5.1.8. 3D User Interfaces
  • A5.1.9. User and perceptual studies
  • A5.2. Data visualization
  • A5.6. Virtual reality, augmented reality
  • A5.6.1. Virtual reality
  • A5.6.2. Augmented reality
  • A5.6.3. Avatar simulation and embodiment
  • A5.6.4. Multisensory feedback and interfaces
  • A5.7.2. Music

Other Research Topics and Application Domains

  • B2.8. Sports, performance, motor skills
  • B6.1.1. Software engineering
  • B9.2.1. Music, sound
  • B9.4. Sports
  • B9.5.1. Computer science
  • B9.5.6. Data science
  • B9.6.10. Digital humanities
  • B9.8. Reproducibility

1 Team members, visitors, external collaborators

Research Scientists

  • Stéphane Huot [Team leader, Inria, Senior Researcher, HDR]
  • Bruno Fruchard [Inria, Researcher, from Oct 2022]
  • Sylvain Malacria [Inria, Researcher]
  • Mathieu Nancel [Inria, Researcher]
  • Marcelo M. Wanderley [Inria & McGill University (Canada), Advanced Research Position, from Dec 2022]

Faculty Members

  • Géry Casiez [Université de Lille, Professor, Junior member of Institut Universitaire de France, HDR]
  • Thomas Pietrzak [Université de Lille, Associate Professor, HDR]
  • Damien Pollet [Université de Lille, Associate Professor]
  • Aurélien Tabard [Université Lyon 1, from Sep 2022, Associate Professor (in delegation)]

Post-Doctoral Fellow

  • Bruno Fruchard [Inria, until Sep 2022]

PhD Students

  • Yuan Chen [Université de Lille & University of Waterloo (Canada)]
  • Johann Gonzalez Avila [Université de Lille & Carleton University (Canada)]
  • Alice Loizeau [Inria]
  • Eva Mackamul [Inria]
  • Raphaël Perraud [Inria, from Nov 2022]
  • Grégoire Richard [Université de Lille]
  • Philippe Schmid [Inria]
  • Travis West [Université de Lille & McGill University (Canada)]

Technical Staff

  • Axel Antoine [Inria, Engineer]
  • Danny Kieken [Université de Lille, Engineer, from Oct 2022]
  • Rahul Kumar Ray [Inria, Engineer, until Nov 2022]

Interns and Apprentices

  • Baptiste Kieffer [Inria, Intern, from Jun 2022 until Aug 2022]
  • Danny Kieken [Inria, Intern, from Mar 2022 until Aug 2022]
  • Édouard Kril [Université de Lille, Intern, from May 2022 until Aug 2022]
  • Ludovic Mantovani [Inria, Intern, from Jul 2022 until Sep 2022]
  • Raphaël Perraud [Inria, Intern, from May 2022 until Oct 2022]

Administrative Assistants

  • Lucile Leclercq [Inria, from Nov 2022]
  • Karine Lewandowski [Inria, until Nov 2022]

External Collaborator

  • Edward Lank [University of Waterloo (Canada), until Mar 2022]

2 Overall objectives

Human-Computer Interaction (HCI) is a constantly moving field [35]. Changes in computing technologies extend their possible uses and modify the conditions of existing uses. People also adapt to new technologies and adjust them to their own needs [41]. New problems and opportunities thus regularly arise and must be addressed from the perspectives of both the user and the machine, to understand and account for the tight coupling between human factors and interactive technologies. Our vision is to connect these two elements: Knowledge & Technology for Interaction.

2.1 Knowledge for Interaction

In the early 1960s, when computers were scarce, expensive, bulky, and formally scheduled machines used for automatic computations, Engelbart saw their potential as personal interactive resources. He saw them as tools we would purposefully use to carry out particular tasks and that would empower people by supporting intelligent use [31]. Others at the same time saw computers differently: as partners, intelligent entities to whom we would delegate tasks. These two visions still constitute the roots of today's predominant HCI paradigms, use and delegation. In the delegation approach, a lot of effort has been made to support oral, written and non-verbal forms of human-computer communication, and to analyze and predict human behavior. But the inconsistency and ambiguity of human beings, and the variety and complexity of contexts, make these tasks very difficult [46]; in this approach, the machine is thus the center of interest.

2.1.1 Computers as tools

The focus of Loki is not on what machines can understand or do by themselves, but on what people can do with them. We do not reject the delegation paradigm but clearly favor that of tool use, aiming for systems that support intelligent use rather than for intelligent systems. And as the frontier between the two is getting thinner, one of our goals is to better understand what makes an interactive system perceived as a tool or as a partner, and how the two paradigms can be combined for the best benefit of the user.

2.1.2 Empowering tools

The ability provided by interactive tools to create and control complex transformations in real time can support intellectual and creative processes in unusual but powerful ways. But mastering powerful tools is neither simple nor immediate: it requires learning and practice. Our research in HCI should not just focus on novice or highly proficient users; it should also care about intermediate ones willing to devote time and effort to developing new skills, be it for work or leisure.

2.1.3 Transparent tools

Technology is most empowering when it is transparent: invisible in effect, it does not get in your way but lets you focus on the task. Heidegger characterized this unobtrusive relation to things with the term zuhanden (ready-to-hand). Transparency of interaction is not best achieved with tools mimicking human capabilities, but with tools taking full advantage of them given the context and task. For instance, the transparency of driving a car “is not achieved by having a car communicate like a person, but by providing the right coupling between the driver and action in the relevant domain (motion down the road)” [50]. Our actions towards the digital world need to be digitized, and we must receive proper feedback in return. But input and output technologies pose somewhat inevitable constraints, while the number, diversity, and dynamicity of digital objects call for more and more sophisticated perception-action couplings for increasingly complex tasks. We want to study the means currently available for perception and action in the digital world: Do they leverage our perceptual and control skills? Do they support the right level of coupling for transparent use? Can we improve them or design more suitable ones?

2.2 Technology for Interaction

Studying the interactive phenomena described above is one of the pillars of HCI research, in order to understand, model, and ultimately improve them. Yet we also have to make those phenomena happen, to make them possible and reproducible, be it for further research or for their diffusion [34]. However, because of the high viscosity and the lack of openness of current systems, this requires considerable effort in designing, engineering, implementing and hacking hardware and software interactive artifacts. This is what we call “The Iceberg of HCI Research”, whose hidden part supports the design and study of new artifacts, but also informs their creation process.

2.2.1 “Designeering Interaction”

Both parts of this iceberg strongly influence each other: the design of interaction techniques (the visible top) informs us about the capabilities and limitations of the platform and software being used (the hidden bottom), giving insights into what could be done to improve them. Conversely, new architectures and software tools open the way to new designs by providing the necessary bricks to build with [36]. These bricks define the adjacent possible of interactive technology, the set of what could be designed by assembling the parts in new ways. Exploring ideas that lie outside of the adjacent possible requires the necessary technological evolutions to be addressed first. This is a slow and gradual but uncertain process, which helps to explore and fill a number of gaps in our research field but can also lead to deadlocks. We want to better understand and master this process, i.e., analyzing the adjacent possible of HCI technology and methods, and introduce tools to support and extend it. This could help make technology better suited to the exploration of the fundamentals of interaction, and to their integration into real systems, a way to ultimately make interactive systems empowering tools.

2.2.2 Computers vs Interactive Systems

In fact, today's interactive systems (e.g., desktop computers, mobile devices) share very similar layered architectures inherited from the first personal computers of the 1970s. This abstraction of resources provides developers with standard components (UI widgets) and high-level input events (mouse and keyboard) that obviously ease the development of common user interfaces for predictable and well-defined tasks and users' behaviors. But it does not favor the implementation of non-standard interaction techniques that could be better adapted to more particular contexts, or to expressive and creative uses. Those often require going deeper into the system layers and hacking them to get access to the required functionalities and/or data, which implies switching between programming paradigms and/or languages.

And these limitations are ever more pressing as interactive systems have changed deeply in the last 20 years. They are no longer limited to a simple desktop or laptop computer with a display, a keyboard and a mouse. They are becoming more and more distributed and pervasive (e.g., mobile devices, the Internet of Things). They change dynamically with recombinations of hardware and software (e.g., transitions between multiple devices, modular interactive platforms for collaborative use). Systems are moving “out of the box” with Augmented Reality, and users are going “inside of the box” with Virtual Reality. This obviously raises new challenges in terms of human factors, usability and design, but it also deeply questions current architectures.

2.2.3 The Interaction Machine

We believe that promoting digital devices to empowering tools requires better fundamental knowledge about interaction phenomena AND a revised architecture of interactive systems that supports this knowledge. By following a comprehensive systems approach—encompassing human factors, hardware elements, and all software layers above—we want to define the founding principles of an Interaction Machine:

  • a set of hardware and software requirements with associated specifications for interactive systems to be tailored to interaction by leveraging human skills;
  • one or several implementations to demonstrate and validate the concept and the specifications in multiple contexts;
  • guidelines and tools for designing and implementing interactive systems, based on these specifications and implementations.

To reach this goal, we will adopt an opportunistic and iterative strategy guided by the designeering approach, where the engineering aspect is fueled by the interaction design and study aspect. We will address several fundamental problems of interaction related to our vision of “empowering tools”, which, in combination with state-of-the-art solutions, will instruct us on the requirements for the solutions to be supported in an interactive system. This consists of reifying the concept of the Interaction Machine in multiple contexts and for multiple problems, before converging towards a more unified definition of “what is an interactive system”, the ultimate Interaction Machine, which constitutes the main scientific and engineering challenge of our project.

3 Research program

Interaction is by nature a dynamic phenomenon that takes place between interactive systems and their users. Redesigning interactive systems to better account for interaction requires a fine understanding of these dynamics from the user's side so as to better handle them from the system's side. In fact, the layers of current interactive systems abstract hardware and system resources from a system and programming perspective. Following our Interaction Machine concept, we are reconsidering these architectures from the user's perspective, through different levels of dynamics of interaction (see Figure 1).

Figure 1: Levels of dynamics of interaction

Considering phenomena that occur at each of these levels, as well as their relationships, will help us to acquire the necessary knowledge (Empowering Tools) and technological bricks (Interaction Machine) to reconcile the way interactive systems are designed and engineered with human abilities. Although our strategy is to investigate issues and address challenges at all three levels, our immediate priority is to focus on micro-dynamics, since it concerns very fundamental knowledge about interaction and relates to very low-level parts of interactive systems, which is likely to influence our future research and developments at the other levels.

3.1 Micro-Dynamics

Micro-dynamics involve low-level phenomena and human abilities related to short time scales and instantaneity, and to the perception-action coupling in interaction, when the user has almost no control over or consciousness of the action once it has started. From a system perspective, this has implications mostly for input and output (I/O) management.

3.1.1 Transfer functions design and latency management

We have developed a recognized expertise in the characterization and the design of transfer functions [30, 45], i.e., the algorithmic transformations of raw user input for system use. Ideally, transfer functions should match the interaction context. Yet the question of how to maximize one or more criteria in a given context remains open, and on-demand adaptation is difficult because transfer functions are usually implemented at the lowest possible level to avoid latency. Latency has indeed long been known as a determinant of human performance in interactive systems [40] and recently regained attention with touch interactions [37]. These two problems require cross-examination to improve performance with interactive systems: latency can be a confounding factor when evaluating the effectiveness of transfer functions, and transfer functions can also include algorithms to compensate for latency.

We have proposed inexpensive but robust methods for input filtering [3] and for the measurement of end-to-end latency [29], and worked on compensation methods [44] and the evaluation of their perceived side effects [9]. Our goal is then to automatically adapt transfer functions to individual users and contexts of use, which we started to explore in [39], while reducing latency in order to support stable and appropriate control. To achieve this, we will investigate combinations of low-level (embedded) and high-level (application) ways to take user capabilities and task characteristics into account and to reduce or compensate for latency in different contexts, e.g., using a mouse or a touchpad, a touch-screen, an optical finger navigation device, or a brain-computer interface. From an engineering perspective, this knowledge of low-level human factors will help us rethink and redesign the I/O loop of interactive systems in order to better account for them and achieve more adapted and adaptable perception-action coupling.
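
To make the filtering side concrete, here is a minimal TypeScript sketch of a speed-adaptive low-pass filter in the spirit of the methods cited above; parameter names and default values are illustrative assumptions, not the published method.

```typescript
// Minimal sketch of a speed-adaptive low-pass filter for noisy input,
// trading jitter reduction at low speeds for low lag at high speeds.
// Parameter names and default values are illustrative.
class AdaptiveFilter {
  private hatX: number | null = null; // filtered position
  private hatDx = 0;                  // filtered speed estimate

  constructor(
    private minCutoff = 1.0, // Hz: governs jitter reduction at low speed
    private beta = 0.007,    // speed coefficient: reduces lag at high speed
    private dCutoff = 1.0    // Hz: cutoff for the speed estimate
  ) {}

  // Smoothing factor of a first-order low-pass filter for a given
  // cutoff frequency and time step.
  private alpha(cutoff: number, dt: number): number {
    const tau = 1 / (2 * Math.PI * cutoff);
    return 1 / (1 + tau / dt);
  }

  filter(value: number, dt: number): number {
    if (this.hatX === null) {
      this.hatX = value;
      return value;
    }
    // Estimate and smooth the input speed.
    const aD = this.alpha(this.dCutoff, dt);
    const rawDx = (value - this.hatX) / dt;
    this.hatDx = aD * rawDx + (1 - aD) * this.hatDx;
    // Raise the cutoff with speed: less smoothing (hence less lag) when fast.
    const cutoff = this.minCutoff + this.beta * Math.abs(this.hatDx);
    const a = this.alpha(cutoff, dt);
    this.hatX = a * value + (1 - a) * this.hatX;
    return this.hatX;
  }
}
```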

3.1.2 Tactile feedback & haptic perception

We are also concerned with the physicality of human-computer interaction, with a focus on haptic perception and related technologies. For instance, when interacting with virtual objects such as software buttons on a touch surface, the user cannot feel the click sensation as with physical buttons. The tight coupling between how we perceive and how we manipulate objects is then essentially broken, even though it is instrumental for efficient direct manipulation. We have addressed this issue in multiple contexts by designing, implementing and evaluating novel applications of tactile feedback [5].

In comparison with many other modalities, one difficulty with tactile feedback is its diversity. It groups sensations of forces, vibrations, friction, or deformation. Although this diversity is an asset, it also raises usability and technological challenges, since each kind of haptic stimulation requires different kinds of actuators with their own parameters and thresholds, and results from one are hardly applicable to the others. From a “knowledge” point of view, we want to better understand and empirically classify haptic variables and the kind of information they can represent (continuous, ordinal, nominal), their resolution, and their applicability to various contexts. From the “technology” perspective, we want to develop tools to inform and ease the design of haptic interactions taking best advantage of the different technologies in a consistent and transparent way.

3.2 Meso-Dynamics

Meso-dynamics relate to phenomena that arise during interaction, on a longer but still short time scale. For users, they relate to performing intentional actions, to goal planning and tool selection, and to forming sequences of interactions based on a known set of rules or instructions. From the system perspective, they relate to how possible actions are exposed to the user and how they have to be executed (i.e., interaction techniques). They also have implications for the tools used to design and implement those techniques (programming languages and APIs).

3.2.1 Interaction bandwidth and vocabulary

Interactive systems and their applications have an ever-increasing number of available features and commands, due for instance to the large amount of data to manipulate, the increasing power and number of functionalities, or the multiple contexts of use.

On the input side, we want to augment the interaction bandwidth between the user and the system in order to cope with this increasing complexity. In fact, most input devices capture only a few of the movements and actions the human body is capable of. Our arms and hands, for instance, have many degrees of freedom that are not fully exploited in common interfaces. We have recently designed new technologies to improve expressiveness, such as a bendable digitizer pen [32] and reliable technology for studying the benefits of finger identification on multi-touch interfaces [33].

On the output side, we want to expand users' interaction vocabulary. All of the features and commands of a system cannot be displayed on screen at the same time, and many advanced features are hidden from users by default (e.g., hotkeys) or buried in deep hierarchies of command-triggering systems (e.g., menus). As a result, users tend to use only a subset of all the tools the system actually offers [43]. We will study how to help them broaden their knowledge of the available functions.

Through this “opportunistic” exploration of alternative and more expressive input methods and interaction techniques, we will particularly focus on the technological requirements necessary to integrate them into interactive systems, in relation to our redesign of the I/O stack at the micro-dynamics level.

3.2.2 Spatial and temporal continuity in interaction

At a higher level, we will investigate how more expressive interaction techniques affect users' strategies when performing sequences of elementary actions and tasks. More generally, we will explore “continuity” in interaction. Interactive systems have moved from one computer to multiple connected interactive devices (computers, tablets, phones, watches, etc.) that can also be augmented through a Mixed-Reality paradigm. This distribution of interaction raises new challenges, both in terms of usability and engineering, that we clearly have to consider in our main objective of revisiting interactive systems [42]. It involves the simultaneous use of multiple devices, as well as changes in the role of devices according to location, time, task, and context of use: a tablet can be used as the main device while traveling, and become an input device or a secondary monitor when resuming that same task once in the office; a smartwatch can be used as a standalone device to send messages, but also as a remote controller for a wall-sized display. One challenge is then to design interaction techniques that support smooth, seamless transitions during these spatial and temporal changes, in order to maintain the continuity of uses and tasks, and to determine how to integrate these principles into future interactive systems.

3.2.3 Expressive tools for prototyping, studying, and programming interaction

Current systems suffer from engineering issues that keep constraining and influencing how interaction is thought, designed, and implemented. Addressing the challenges presented in this section and making the solutions possible requires extended expressiveness, and researchers and designers must either wait for the proper toolkits to appear, or “hack” existing interaction frameworks, often bypassing existing mechanisms. For instance, numerous usability problems in existing interfaces stem from a common cause: the lack, or untimely discarding, of relevant information about how events are propagated and how changes come to occur in interactive environments. On top of our redesign of the I/O loop of interactive systems, we will investigate how to facilitate access to that information and promote a more grounded and expressive way to describe and exploit input-to-output chains of events at every system level. We want to provide finer granularity and better-described connections between the causes of changes (e.g., input events and system triggers), their context (e.g., system and application states), their consequences (e.g., interface and data updates), and their timing [8]. More generally, a central theme of our Interaction Machine vision is to promote interaction as a first-class object of the system [28], and we will study alternative and better-adapted technologies for designing and programming interaction, as we did recently to ease the prototyping of Digital Musical Instruments [2] and the programming of graphical user interfaces [10]. Ultimately, we want to propose a unified model of hardware and software scaffolding for interaction that will contribute to the design of our Interaction Machine.
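
As a purely illustrative sketch of what such better-described input-to-output chains could look like (the names and fields below are assumptions, not an existing API):

```typescript
// Illustrative sketch (not an existing API): an input-to-output chain of
// events annotated with its cause, context, consequences, and timing.
interface CausalEvent {
  cause: { device: string; event: string } | { trigger: string };
  context: { application: string; state: Record<string, unknown> };
  consequences: { target: string; change: string }[];
  timestamp: number;        // when the cause occurred
  propagatedAt?: number[];  // when each consequence was applied
}

// Example: a mouse press that opened a menu.
const example: CausalEvent = {
  cause: { device: "mouse", event: "button-press" },
  context: { application: "editor", state: { tool: "select" } },
  consequences: [{ target: "menu", change: "opened" }],
  timestamp: 1024.5,
  propagatedAt: [1032.1],
};
```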

3.3 Macro-Dynamics

Macro-dynamics involve longer-term phenomena such as skill acquisition, learning of the functionalities of the system, and reflexive analysis of one's own use (e.g., when the user faces novel or unexpected situations that require a high level of knowledge of the system and its functioning). From the system perspective, this implies better supporting cross-application and cross-platform mechanisms so as to favor skill transfer. It also requires improving instrumentation and high-level logging capabilities to favor reflexive use, as well as flexibility and adaptability so that users are able to finely tune and shape their tools.

We want to move away from the usual binary distinction between “novices” and “experts” [4] and explore means to promote and assist digital skill acquisition in a more progressive fashion. Indeed, users have a permanent need to adapt their skills to the constant and rapid evolution of the tasks and activities they carry out on a computer system, but also to the changes in the software tools they use [48]. Software strikingly lacks powerful means of acquiring and developing these skills [4], forcing users to mostly rely on outside support (e.g., being guided by a knowledgeable person, following online tutorials of varying quality). As a result, users tend to rely on a surprisingly limited interaction vocabulary, or make do with sub-optimal routines and tools [49]. Ultimately, the user should be able to master the interactive system to form durable and stabilized practices that would eventually become automatic and reduce mental and physical effort, making interaction transparent.

In our previous work, we identified the fundamental factors influencing expertise development in graphical user interfaces, and created a conceptual framework that characterizes users' performance improvement with UIs [4, 7]. We designed and evaluated new command selection and learning methods to leverage users' digital skill development with user interfaces, on both desktop and touch-based computers [6].

We are now interested in broader means to support the analytic use of computing tools:

  • to foster understanding of interactive systems. As the digital world shifts to more and more complex systems driven by machine learning algorithms, we increasingly lose our comprehension of which process caused the system to respond in one way rather than another. We will study how novel interactive visualizations can help reveal and expose the “intelligence” behind them, in ways that let people better master their complexity.
  • to foster reflection on interaction. We will study how to foster users' reflection on their own interaction in order to encourage them to acquire novel digital skills. We will build real-time and off-line software for monitoring how users' ongoing activity is conducted at the application and system levels. We will develop augmented feedback and interactive history visualization tools that offer contextual visualizations to help users better understand and share their activity, compare their actions to those of others, and discover possible improvements.
  • to optimize skill transfer and tool re-appropriation. The rapid evolution of new technologies has drastically increased the frequency at which systems are updated, often requiring users to relearn everything from scratch. We will explore how to minimize the cost of appropriating an interactive tool by helping users capitalize on their existing skills.

We plan to explore these questions, as well as the use of such aids, in several contexts such as web-based, mobile, or BCI-based applications. Nonetheless, a core aspect of this work will be to design systems and interaction techniques that are as platform-agnostic as possible, in order to better support skill transfer. Following our Interaction Machine vision, this will lead us to rethink how interactive systems have to be engineered so that they can offer better instrumentation, higher adaptability, and less separation between applications and tasks in order to support reuse and skill transfer.

4 Application domains

Loki works on fundamental and technological aspects of Human-Computer Interaction that can be applied to diverse application domains.

Our 2022 research involved desktop and mobile interaction, gestural interaction, virtual and extended reality, 3D manipulation techniques, and haptics, with notable methodological contributions to the design and evaluation of novel interaction techniques. Our technical work contributes to the more general application domain of interactive systems engineering.

5 Social and environmental responsibility

5.1 Footprint of research activities

Since 2022, we have included an estimate of carbon footprint costs in our provisional travel budget. Although this is not our primary criterion, it at least makes us aware of these costs and leads us to consider them in our decisions, especially when events can also be attended remotely.

6 Highlights of the year

6.1 Awards

Best paper award from the ACM EICS conference for the paper “What do Researchers Need when Implementing Novel Interaction Techniques?”, by T. Raffaillac & S. Huot [19].

Honorable mention award from the ACM SUI conference for the paper “MicroPress: Detecting Pressure and Hover Distance in Thumb-to-Finger Interactions”, by R. Dobinson, M. Teyssier, J. Steimle & B. Fruchard [17].

Axel Antoine received the “Prix de Thèse en IHM” awarded by AFIHM during IHM'22 for his Ph.D. dissertation “Études des stratégies et conception d'outils pour la production de supports illustratifs d'interaction”.

7 New software and platforms

7.1 New software

7.1.1 BoxingCadence

  • Name:
    Annotation Tool to Identify Hits in Boxing Videos
  • Keywords:
    Human Computer Interaction, Video annotation, JavaScript, Annotation tool, Sport
  • Functional Description:
Video time can be controlled with the mouse or keyboard at frame-level granularity. Annotations for each athlete can be associated with the current frame by pressing a keyboard key. The tool visualizes all annotations on a timeline below the video, and clicking on one of them jumps to the associated frame in the video. A minimal sketch of this kind of frame-accurate control follows this entry.
  • Author:
    Bruno Fruchard
  • Contact:
    Bruno Fruchard
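
As a rough illustration of this kind of frame-accurate control in a browser (the frame rate, key bindings, and data shapes are assumptions, not taken from the actual tool):

```typescript
// Sketch of frame-accurate video stepping and annotation in a browser.
// The frame rate and key bindings are illustrative assumptions.
const FPS = 30;
const video = document.querySelector("video")!;
const annotations: { frame: number; athlete: string }[] = [];

const currentFrame = () => Math.round(video.currentTime * FPS);

document.addEventListener("keydown", (e) => {
  switch (e.key) {
    case "ArrowRight": // step one frame forward
      video.currentTime = (currentFrame() + 1) / FPS;
      break;
    case "ArrowLeft": // step one frame backward
      video.currentTime = Math.max(0, currentFrame() - 1) / FPS;
      break;
    case "a": // annotate a hit for athlete A at the current frame
      annotations.push({ frame: currentFrame(), athlete: "A" });
      break;
  }
});

// Clicking an annotation on the timeline jumps to its frame.
function jumpTo(a: { frame: number }) {
  video.currentTime = a.frame / FPS;
}
```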

7.1.2 ClimbingAnnotation

  • Name:
    Video annotation tool for sport performances
  • Keywords:
    Human Computer Interaction, JavaScript, Sport, Annotation tool, Video annotation
  • Functional Description:
The tool enables viewing a video and adding frame-precise annotations that specify the start and end of significant actions. It was originally designed to study lead climbing videos, and supports annotating actions such as grasping or releasing a hold, the athlete's energy consumption, or athletes' and trainers' comments. Annotations can be entered through sequencer buttons or keyboard shortcuts. As annotations are entered, the tool automatically aggregates them into plots that ease the interpretation of a performance. Plots depict, for instance, the average holding time per hand or the evolution of the score over a time interval.
  • Author:
    Bruno Fruchard
  • Contact:
    Bruno Fruchard

7.1.3 Esquisse

  • Keyword:
    Vector graphics
  • Functional Description:

Esquisse is a software tool designed to facilitate the production of vector-based illustrative figures of interactive scenarios. To do so, it relies on a 3D scene where the user imports the necessary elements (interactive devices and characters that interact with these devices), stages the scene by modifying the position and posture of these elements, adjusts the virtual camera of the scene, and finally exports the view from that camera as a vector-based trace figure (a static illustrative figure created to capture the essence of a situation, removing unnecessary details by limiting the graphical representation to the most important contours/outlines of the shown objects and people).

Esquisse was built as a web application implemented with React, TypeScript, WebAssembly and three.js. The vector-based rendering of the 3D scene is produced by our own implementation of state-of-the-art non-photorealistic rendering algorithms, adapted to the specific needs of Esquisse.

  • URL:
  • Author:
    Axel Antoine
  • Contact:
    Sylvain Malacria

7.1.4 fast-triangle-triangle-intersection

  • Keywords:
    Geometric computing, Triangle-triangle intersection
  • Functional Description:

In order to detect possible intersections between 3D meshes, it is necessary to detect possible intersections between the triangles that make up these meshes. This software computes these intersections from orientation tests on the triangles, and recovers the geometrical shapes (point, segment, polygon) corresponding to the possible intersections.

This tool is implemented in TypeScript. It is based on the algorithm described in the article “Faster Triangle-Triangle Intersection Tests” by Devillers and Guigue, and extends it to handle coplanar triangle intersections, which were not handled by the original algorithm. A sketch of the underlying orientation predicate follows this entry.

  • URL:
  • Author:
    Axel Antoine
  • Contact:
    Sylvain Malacria
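
To give a flavor of these orientation calculations, here is a minimal TypeScript sketch of the 3D orientation predicate at the core of such triangle-triangle tests; function names are illustrative, not the library's actual API.

```typescript
// Sign of the determinant | b-a, c-a, d-a |: tells on which side of the
// plane (a, b, c) the point d lies (+1 above, -1 below, 0 coplanar).
type Vec3 = [number, number, number];

function orient3d(a: Vec3, b: Vec3, c: Vec3, d: Vec3): number {
  const [ax, ay, az] = a;
  const ux = b[0] - ax, uy = b[1] - ay, uz = b[2] - az;
  const vx = c[0] - ax, vy = c[1] - ay, vz = c[2] - az;
  const wx = d[0] - ax, wy = d[1] - ay, wz = d[2] - az;
  const det =
    ux * (vy * wz - vz * wy) -
    uy * (vx * wz - vz * wx) +
    uz * (vx * wy - vy * wx);
  return Math.sign(det);
}

// Quick rejection test: if all three vertices of triangle p lie strictly
// on the same side of the plane of (q0, q1, q2), the triangles cannot
// intersect.
function sameSide(q0: Vec3, q1: Vec3, q2: Vec3, p: Vec3[]): boolean {
  const signs = p.map((v) => orient3d(q0, q1, q2, v));
  return signs.every((s) => s > 0) || signs.every((s) => s < 0);
}
```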

7.1.5 arrangement-2d-js

7.1.6 three-mesh-halfedge

  • Keyword:
    Three.js
  • Functional Description:
This software implements, in TypeScript and for three.js geometries, the half-edge data structure described by Kalle Rutanen in his post on the topic. It can be used to navigate through the edges and vertices of a 3D mesh, regardless of whether this mesh is one- or two-manifold (that is, whether the mesh can be split along its various edges and subsequently unfolded so that it lies flat without overlapping pieces). These structures serve as the base structure for the non-photorealistic SVG rendering algorithm used in the Esquisse software. This implementation supports several cases that are not handled by other implementations, for instance meshes with isolated polygons, vertices or edges, or multiple edges between the same vertices. A minimal sketch of the structure follows this entry.
  • URL:
  • Author:
    Axel Antoine
  • Contact:
    Sylvain Malacria
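
Below is a minimal TypeScript sketch of half-edge connectivity as maintained by this kind of library; field and class names are illustrative, not the library's actual API.

```typescript
// Minimal half-edge connectivity. Each undirected edge is represented by
// two opposite (twin) half-edges; each half-edge knows the next one
// around its face and the vertex it points to.
class Vertex {
  halfedge: Halfedge | null = null; // one outgoing half-edge
  constructor(public x: number, public y: number, public z: number) {}
}

class Face {
  constructor(public halfedge: Halfedge) {} // one half-edge of its loop
}

class Halfedge {
  twin: Halfedge | null = null; // opposite half-edge (null on a boundary)
  next!: Halfedge;              // next half-edge around the face
  face: Face | null = null;     // incident face (null on a boundary)
  constructor(public vertex: Vertex) {} // vertex the half-edge points to
}

// Iterate over the vertices of a face by walking its half-edge loop.
function* faceVertices(face: Face): Generator<Vertex> {
  let he = face.halfedge;
  do {
    yield he.vertex;
    he = he.next;
  } while (he !== face.halfedge);
}
```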

7.1.7 three-svg-renderer

  • Name:
SVG non-photorealistic rendering algorithm in three.js
  • Keywords:
    Three.js, SVG
  • Functional Description:

Standalone implementation of a non-photorealistic rendering algorithm that can be used to render a 3D scene as an SVG vector graphics file. This algorithm is used in the Esquisse software tool. It supports, to some extent, 3D scenes with intersections between objects, and can render visible, invisible, and hidden contours. It also generates lines to emphasize creases over a certain angle, and fills each region of a 3D object with a single solid color corresponding to the color of that region of the object, ignoring its texture. Some rendering problems might be observed when 3D objects intersect.

In order to achieve this result, the renderer analyzes the geometry of all objects in the scene, builds a viewmap of the mesh edges, and computes the visibility of each contour in the scene. It then produces an SVG file based on all this information. This algorithm was implemented in three.js with TypeScript.

  • URL:
  • Author:
    Axel Antoine
  • Contact:
    Sylvain Malacria

8 New results

Following our research program, we have studied the dynamics of interaction at three levels, depending on the interaction time scale and the associated user perception and behavior: Micro-dynamics, Meso-dynamics, and Macro-dynamics. Considering phenomena that occur at each of these levels, as well as their relationships, will help us acquire the necessary knowledge (Empowering Tools) and technological bricks (Interaction Machine) to reconcile the way interactive systems are designed and engineered with human abilities. Our strategy is to investigate issues and address challenges at all three levels of dynamics in order to contribute to our longer-term objective of defining the basic principles of an Interaction Machine.

This year, we also introduce a “Methodology” section to report on one of our contributions which is transverse to the axes of our research program.

8.1 Micro-dynamics

Participants: Bruno Fruchard, Géry Casiez [contact person], Alice Loizeau, Sylvain Malacria, Mathieu Nancel, Thomas Pietrzak, Philippe Schmid.

8.1.1 Studying the timescale of perceptual-motor (re)calibration following a change in visual display gain

Experiencing a non-1:1 mapping between perception and action in everyday life is not common. It could be considered a problem for our perceptual-motor system, because of the need to adapt our goal-directed movements to different gains between the movement and task spaces. Yet it is a common situation when interacting with a computer using a mouse, which requires adapting our movements to different Control-Display gains, for instance when switching from one operating system to another. We conducted a study to characterize the perceptual-motor calibration process following a sudden change in Control-Display gain [12]. Sixteen participants manipulated a computer mouse to move a cursor on screen. The discrete aiming task consisted in reaching a target from a starting position as fast and as accurately as possible. Our methodology consisted in suddenly manipulating the gain between the two spaces following a three-step adaptation methodology (a baseline condition, followed by a perturbation, and a return to the baseline condition). The results demonstrated that not only did participants produce adaptive behavior following several types of perturbations, they were also able to do so at a very short timescale.
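
For reference, the Control-Display (CD) gain manipulated in this study is conventionally defined as the ratio of cursor velocity to device velocity (a standard definition, not specific to this study):

$$G_{CD} = \frac{v_{\text{cursor}}}{v_{\text{device}}}$$

A gain of 1 reproduces the device motion exactly; the perturbations above correspond to sudden changes of $G_{CD}$.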

8.1.2 Endpoint prediction in pointing tasks

We proposed a new simplified pointing model formulated as a feedback-based dynamical system [13]. This model takes into account the commutation between the correction and ballistic phases in pointing tasks (see Figure 2). We use the mouse position to estimate the model parameters online and predict the endpoint of the pointer trajectory. Our model allows the use of linear regression techniques to estimate its parameters. In particular, we compared our prediction algorithm with “kinematic endpoint prediction” (KEP) [38], the best-known approach in the group of memoryless algorithms. Our results suggest that the switched algorithm outperforms KEP, especially in the early phase (at 85% of the trajectory path), and converges to an almost exact value at the end thanks to the separate correction-phase estimation algorithm.

Figure 2: Switched model diagram.
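
To illustrate the general idea only (this is not the published switched model): fit a simple first-order linear model to the recent cursor samples by least squares, then extrapolate its fixed point as the predicted endpoint. The model form and names below are assumptions for illustration.

```typescript
// Toy endpoint predictor: fit x[k+1] = a*x[k] + c to the recent cursor
// positions by least squares, then predict the endpoint as the fixed
// point x* = c / (1 - a). Illustrative only; not the published algorithm.
function predictEndpoint(positions: number[]): number | null {
  const n = positions.length - 1; // number of (x[k], x[k+1]) pairs
  if (n < 2) return null;
  let sx = 0, sy = 0, sxx = 0, sxy = 0;
  for (let k = 0; k < n; k++) {
    const x = positions[k], y = positions[k + 1];
    sx += x; sy += y; sxx += x * x; sxy += x * y;
  }
  const denom = n * sxx - sx * sx;
  if (Math.abs(denom) < 1e-9) return null;
  const a = (n * sxy - sx * sy) / denom; // slope
  const c = (sy - a * sx) / n;           // intercept
  if (Math.abs(a) >= 1) return null;     // no stable fixed point
  return c / (1 - a);                    // predicted endpoint
}
```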

8.1.3 Studying the influence of the size of a virtual trackball on 3D rotations

Rotating 3D objects on desktop computers with a mouse or a trackpad is a notoriously difficult task, especially for novice users. Techniques relying on a “virtual trackball” have been proposed in the literature and continue to be used in most 3D software (see Figure 3). While several studies have compared the performance of these techniques, none focused on an intrinsic parameter of these techniques: the radius of the virtual control sphere of the trackball. In a controlled study, we investigated the influence of the radius of the control sphere on the performance and behavior of users in a 3D docking task [15]. Surprisingly, the results do not suggest a significant effect of the size of the virtual control sphere on user performance. However, an analysis of user behavior suggests that it influences users' strategies for interacting with the virtual trackball.

Figure 3: The rotation angle α is computed from the mouse displacement between PA and PB, corresponding to the orthographic projections of PA' and PB' on the virtual sphere.
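
For reference, here is a minimal TypeScript sketch of the classic virtual-sphere mapping that underlies such trackballs, with the control sphere radius r as the parameter studied above; this is a generic textbook formulation, not the exact implementation used in the study.

```typescript
// Map a 2D mouse position (centered on the trackball) onto a virtual
// sphere of radius r, then derive a rotation from two successive points.
type V3 = { x: number; y: number; z: number };

function onSphere(px: number, py: number, r: number): V3 {
  const d2 = px * px + py * py;
  if (d2 <= r * r) {
    // Inside the sphere silhouette: orthographic projection onto it.
    return { x: px, y: py, z: Math.sqrt(r * r - d2) };
  }
  // Outside: project onto the sphere's equator.
  const d = Math.sqrt(d2);
  return { x: (px / d) * r, y: (py / d) * r, z: 0 };
}

// Rotation between two projected points: axis = pA × pB, angle between them.
function rotationBetween(pA: V3, pB: V3): { axis: V3; angle: number } {
  const axis = {
    x: pA.y * pB.z - pA.z * pB.y,
    y: pA.z * pB.x - pA.x * pB.z,
    z: pA.x * pB.y - pA.y * pB.x,
  };
  const dot = pA.x * pB.x + pA.y * pB.y + pA.z * pB.z;
  const nA = Math.hypot(pA.x, pA.y, pA.z);
  const nB = Math.hypot(pB.x, pB.y, pB.z);
  const angle = Math.acos(Math.min(1, Math.max(-1, dot / (nA * nB))));
  return { axis, angle };
}
```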

8.1.4 Detecting pressure and hover distance for thumb-to-finger interaction

Thumb-to-finger interactions leverage the thumb for precise, eyes-free input with high sensory bandwidth. Previous research focused on touch-based gestures leveraging finger movements on the skin, and overlooked other input means such as pressure and hovering. Through a proof-of-concept, we demonstrated that one can estimate the pressure applied on the skin and the distance between the thumb and the index finger [17]. The system builds on a magnet, a wearable IMU sensor array, and a bi-directional RNN deep learning approach to enable fine-grained control while preserving the natural tactile feedback of the skin (see Figure 4). Preliminary results indicate that, with short per-user calibration steps, the system is capable of predicting hover distance with 0.57 mm accuracy, and on-skin pressure with a 6.71% normalized pressure error, at 6 locations on the index finger.

Figure 4: Magnet and IMU array system used to estimate pressure levels and hover distances between the thumb and the index finger using a BiRNN deep learning approach.

8.2 Meso-dynamics

Participants: Géry Casiez, Bruno Fruchard, Stéphane Huot, Edward Lank, Alice Loizeau, Sylvain Malacria, Mathieu Nancel, Thomas Pietrzak [contact person], Damien Pollet, Marcelo Wanderley, Travis West.

8.2.1 Studying the Design of Visual Feedback for Representing Contacts in Extended Reality

In the absence of haptic feedback, the perception of contact with virtual objects can rapidly become a problem in extended reality (XR) applications. XR developers often rely on visual feedback to inform the user and display contact information. However, as of today, there is no clear path on how to design and assess such visual techniques. We proposed a design space for the creation of visual feedback techniques meant to represent contact with virtual surfaces in XR [16]. Based on this design space, we conceived a set of visual techniques, including novel approaches based on onomatopoeia and inspired by cartoons, as well as visual effects based on physical phenomena (see Figure 5). We then conducted an online preliminary user study with 60 participants, consisting in assessing 6 visual feedback techniques in terms of user experience. We could notably assess, for the first time, the potential influence of the interaction context by comparing the participants' answers in two different scenarios: industrial versus entertainment conditions. Taken together, our design space and initial results could inspire XR developers for a wide range of applications in which the augmentation of contact is prominent, such as vocational training, industrial assembly/maintenance, surgical simulation, or video games.

Figure 5: Our set of visual feedback techniques meant to represent contact in extended reality. These techniques were conceived using the design space presented in this paper and implemented in a Microsoft HoloLens 2 (left). The techniques are the following: A) Kapow, B) Lightning, C) Color Change, D) Arrow, E) Disk, F) Deformation, G) Spark3D, H) Hole, I) Ripple, J) Crack, K) Poof, L) Shaking, M) Bubble3D, and N) Snowflakes.

8.2.2 Designing Visual Feedback Safety Techniques When Interacting With Encountered-Type Haptic Displays

Encountered-Type Haptic Displays (ETHDs) enable users to touch virtual surfaces using robotic actuators capable of co-locating real and virtual surfaces, without requiring users to wear or hold actuators. One of the main challenges of ETHDs is to ensure that the robotic actuators do not interfere with the VR experience through unexpected collisions with users. We presented a design space for safety techniques using visual feedback to make users aware of the robot's state and thus reduce potential unintended collisions [14]. The blocks that compose this design space focus on what feedback is displayed, when it is displayed, and how it protects the user. Using this design space, a set of 18 techniques was developed, exploring variations along the three dimensions. An evaluation questionnaire focusing on immersion and perceived safety was designed and reviewed by a group of experts, and then used to provide a first assessment of the proposed techniques.

8.2.3 Towards a unified command selection mechanism for touch-based devices

Hotkeys (or keyboard shortcuts) are an efficient command selection mechanism commonly deployed on desktop systems. They facilitate rapid access to specific commands by pressing a modifier key together with another character key. Unlike desktop systems, touch-based devices usually rely on menus and gestures for command selection. On existing smartphones and tablets, commands like finding words require multiple taps, and essential text-editing commands, like undo, are either not supported or only accessible via “physical” gestures like shaking the device. Other commands, like find, can be activated using different interaction paradigms depending on the application. We advocate for the use of hotkeys on touch-based devices. This concept of soft keyboard shortcuts, or SoftCuts (see Figure 6), can already occasionally be found on commercial products, but these rely on inconsistent selection mechanisms and visual representations of shortcuts, and little is known regarding their performance and usability. We therefore explored SoftCuts in four studies. First, we evaluated visual designs and recommend icons with command names for novices, and letters with command names for experts. Second, we investigated discoverability by asking crowdworkers to use our prototype, with some tasks only achievable upon successfully discovering the technique. Discovery rates were high regardless of conditions varying the familiarity and saliency of modifier keys; however, familiarity with desktop hotkeys boosted discoverability. Our third study focused on how prior knowledge of hotkeys could be leveraged: it resulted in a 5% selection time improvement and identified the role of spatial memory in retention. Finally, we compared our soft keyboard layout with a grid layout similar to FastTap. The latter offered a 12-16% gain in selection speed, but at a high cost in terms of screen real estate and low spatial stability. A sketch of the underlying mode-switching logic follows Figure 6.

Figure 6: Example of SoftCuts with a realistic command set, taken from the Windows version of Microsoft Word. It provides access to commands like Add Link, Bold, Copy, Paste, etc.
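
To make the mechanism concrete, here is a minimal TypeScript sketch of the mode-switching logic behind such soft keyboard shortcuts; the command set, key names, and helper functions are illustrative assumptions, not the studied prototype.

```typescript
// Sketch of a SoftCuts-like mode on a soft keyboard: while the modifier
// key is held, character keys are relabeled with commands and trigger
// them instead of typing. Command set and names are illustrative.
const commands: Record<string, () => void> = {
  b: () => console.log("Bold"),
  c: () => console.log("Copy"),
  v: () => console.log("Paste"),
  k: () => console.log("Add Link"),
};

let modifierDown = false;

function onSoftKeyDown(key: string) {
  if (key === "Ctrl") {
    modifierDown = true;
    showCommandLabels(Object.keys(commands)); // relabel keys with commands
  } else if (modifierDown && commands[key]) {
    commands[key](); // execute the command instead of inserting text
  } else {
    insertCharacter(key);
  }
}

function onSoftKeyUp(key: string) {
  if (key === "Ctrl") {
    modifierDown = false;
    showCharacterLabels(); // restore the regular keyboard
  }
}

// Placeholders for the keyboard view and text field of the host app.
declare function showCommandLabels(keys: string[]): void;
declare function showCharacterLabels(): void;
declare function insertCharacter(key: string): void;
```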

8.3 Macro-dynamics

Participants: Géry Casiez, Bruno Fruchard, Stéphane Huot, Eva Mackamul, Sylvain Malacria [contact person], Mathieu Nancel, Grégoire Richard, Travis West.

8.3.1 Collaborative design of vibrotactile patterns using an end-to-end design suite

Designing vibrotactile patterns to produce bodily experiences is challenging because of the complex geometry of the body surface. Additionally, communicating these experiences requires a specific vocabulary that might be difficult to interpret. We contributed to the design of an open-source collaborative suite (https://github.com/TactileVision/TactJam) comprising a stand-alone hardware device that enables directly designing vibrotactile patterns on the body using 8 actuators [23], and an interactive application used to share patterns through a central hub and document them with a 3D visualization (see Figure 7). We evaluated this suite through two workshops: the first focused on designing patterns without the devices, and the second on implementing these patterns. Our analysis demonstrated that designing patterns is strongly influenced by the ability to feel the actuators while producing them: fewer implicit assumptions were made, and designs were guided by personal experience.

Figure 7: TactJam is an end-to-end suite for creating and sharing low-fidelity prototypes of on-body vibrotactile feedback. It is fully open-source and comprises a stand-alone hardware device that enables controlling 8 actuators one can place on their body, and an interactive application communicating with a server to upload and download vibrotactile patterns.

8.4 Interaction Machine

Participants: Géry Casiez, Bruno Fruchard, Stéphane Huot [contact person], Sylvain Malacria, Mathieu Nancel, Thomas Pietrzak, Damien Pollet, Philippe Schmid.

Our transversal “Interaction Machine” research axis was again informed by our contributions to the design and understanding of interaction phenomena, mainly at the micro- and meso-dynamics levels. Moreover, this year we also obtained two results specific to this axis: one on understanding the needs of HCI researchers when prototyping and implementing novel and non-standard interaction techniques; the other on a programming tool that reconciles the application and hardware programming levels. Finally, in 2023, a dedicated engineer will join Loki for at least one year. His mission will be to study and realize the integration of our contributions into a single, generic framework. Beyond contributing to the emergence of our Interaction Machine concept, this will also be an opportunity to raise new related research questions.

8.4.1 Contributions from other research axes

Contributions at the micro-dynamics level give insights into the low-level design of interactive systems. In particular, our study on the timescale of perceptual-motor (re)calibration following a change in visual display gain [12], our simplified pointing model for endpoint prediction [13], and our method for detecting pressure and hover distance in thumb-to-finger interaction [17] have strong implications for how input is managed in interactive systems, at both low and high levels. Current systems, whose architectures and APIs are still driven by the needs of standard interfaces and interaction methods, hardly account for such situations, which tend to become common in modern interactive systems (distributed environments and applications, multiple and advanced devices, etc.). Introducing this new knowledge and these input methods will thus require rethinking the whole input management stack, from the devices to the application: how to account for the variability of users' behaviors and of their timescales of adaptation to visual display gain, and how to limit these changes? How to introduce efficient but configurable and adaptable endpoint prediction algorithms to compensate for latency in complex systems (e.g., distributed, VR, etc.)? How to design an input stack that is generic and flexible enough to account for future (and not yet specified) input methods? How to integrate all these requirements into a single, robust and efficient model?

At the meso-dynamics level, our work on visual feedback for XR contacts [16] and Encountered-Type Haptic Displays (ETHDs) [14] contributes to our overall objective of making interaction and its results or consequences reappear in contexts where, paradoxically, they have been somewhat neglected in favor of “ease of use” and “transparency”. This will definitely impact how future interactive systems should be specified and implemented, in particular by promoting “interaction” as a first-order object that can be manipulated, visualized, and adapted just like other components of the system (e.g., displays, network interfaces, files). It raises both theoretical and epistemological questions (“What is interaction?”, to put it simply...), as well as technical issues (interaction-dedicated architectures, programming languages, and APIs). We will build on our previous work on this topic [47], as well as on an emerging collaboration with the Interactive Informatics Team at ENAC, in order to address these questions that are instrumental for our Interaction Machine project.

8.4.2 What do researchers need when implementing novel interaction techniques?

Application and interaction frameworks (e.g., Qt, JavaFX, React, Android SDK, Unity) are the tools of choice for researchers and UI designers when prototyping new and original interaction techniques. But with little knowledge about actual needs, these frameworks provide incomplete support that restricts, slows down or even prevents the exploration of new ideas. In this context, researchers resort to hacking methods, creating code that lacks robustness beyond experiments, combining libraries of different levels and paradigms, and eventually limiting the dissemination and reproducibility of their work. To better understand this problem, we interviewed 9 HCI researchers and conducted an online survey [19], collecting a total of 32 responses from the HCI research community over a 2-month period. From the results, we identified relevant criteria for choosing frameworks (e.g., ease of use, API quality, documentation vs. functionalities), the problems often met with them (e.g., incomplete documentation and unpredictability), and the “tricks” used as solutions (e.g., custom re-implementation of features, accessing raw input data, reverse-engineering). We then proposed three design principles to better support prototyping for research in UI frameworks:

  1. duplicate singular elements (e.g. mouse, caret) to foster opportunities for extensions;
  2. accumulate rather than replace to keep a history of changes;
  3. defer the execution of predefined behaviors to enable their monitoring and replacement (illustrated in the sketch below).
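
As an illustration of the third principle, here is a minimal TypeScript sketch in which a predefined behavior is held in a replaceable slot so that it can be monitored or substituted at runtime; the names are illustrative, not an existing framework API.

```typescript
// Principle 3, sketched: instead of hard-coding a predefined behavior,
// store it in a replaceable slot so researchers can observe or override
// it without hacking the framework. Names are illustrative.
type Behavior<E> = (event: E) => void;

class DeferredBehavior<E> {
  private observers: Behavior<E>[] = [];
  constructor(private behavior: Behavior<E>) {}

  // Monitor: be notified of every invocation without changing it.
  observe(observer: Behavior<E>): void {
    this.observers.push(observer);
  }

  // Replace: substitute the predefined behavior, keeping a handle on
  // the previous one so it can still be invoked (or restored).
  replace(make: (previous: Behavior<E>) => Behavior<E>): void {
    this.behavior = make(this.behavior);
  }

  invoke(event: E): void {
    this.observers.forEach((o) => o(event));
    this.behavior(event);
  }
}

// Example: log every double-click, then extend the predefined behavior.
const doubleClick = new DeferredBehavior<MouseEvent>((e) =>
  console.log("default double-click", e.button)
);
doubleClick.observe((e) => console.log("observed at", e.timeStamp));
doubleClick.replace((previous) => (e) => {
  console.log("custom pre-processing");
  previous(e); // the predefined behavior is still available
});
```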

Ultimately, these principles could also transfer from the framework/API level to the system level in order to better account for interaction in an “Interaction Machine”.

8.4.3 Toolchain for an embedded authoring and rendering of audio and force-feedback

ForceHost [18] is a toolchain for generating firmware that hosts the authoring and rendering of force-feedback and audio signals, and that communicates through I2C with guest motor and sensor boards. With ForceHost, the stability of audio and haptic loops is no longer delegated to, and dependent on, operating systems and drivers, and devices remain discoverable beyond planned obsolescence. We modified Faust (https://faust.grame.fr/), a high-level language and compiler for real-time audio digital signal processing, to support haptics. Our toolchain compiles audio-haptic firmware applications with Faust and embeds web-based UIs exposing their parameters. We validated our toolchain through example applications and modifications of integrated development environments: script-based programming examples of haptic firmware applications with our haptic1D Faust library, visual programming by mapping input and output signals between audio and haptic devices in Webmapper (https://github.com/libmapper/webmapper), and visual programming with physically-inspired mass-interaction models in Synth-a-Modeler Designer. The main contribution is to facilitate the design and prototyping of interactive systems that leverage the sensorimotor loop, by designing independent building blocks that can be connected together. The embedded authoring tool enables the iterative design of both audio and force-feedback, taking into account both the input and output capabilities of the device.

This approach, bringing high-level programming closer to low-level hardware while preserving its modularity and flexibility, is also a step towards a system dedicated to interaction that could better handle and ease the assembly of various interaction hardware.

8.5 Methodology

Participants: Géry Casiez, Stéphane Huot [contact person], Alice Loizeau, Mathieu Nancel, Thomas Pietrzak, Grégoire Richard.

This section reports on another (new) transverse axis of our research program, which concerns methodological questions in our field. HCI being a relatively new and highly multidisciplinary field, it is quite common that we have to question, revise, adapt and even reinvent our design and validation methods, which in turn can lead to valuable methodological contributions, as was the case this year.

8.5.1 Comparing Experimental Designs for Virtual Embodiment Studies

When designing virtual embodiment studies, one of the key choices is the nature of the experimental factors, either between-subjects or within-subjects. However, it is well known that each design has advantages and disadvantages in terms of statistical power, sample size requirements, and confounding factors. We reported a within-subjects experiment with 92 participants comparing self-reported embodiment scores in a visuomotor task with two conditions: synchronous motions, and asynchronous motions with a latency of 300 ms [20]. With the gathered data, using a Monte-Carlo method, we created numerous simulations of within- and between-subjects experiments by selecting subsets of the data. In particular, we explored the impact of the number of participants on the replicability of the results of the 92-participant within-subjects experiment. For the between-subjects simulations, only the first condition of each participant was considered. The results showed that while the replicability of the results increased with the number of participants for the within-subjects simulations, between-subjects simulations were not able to replicate the initial results no matter the number of participants (see Figure 8). Our main explanation is that participants in virtual embodiment studies answer this kind of questionnaire in a relative way: they need two conditions to provide two different virtual embodiment assessments. We propose several solutions to mitigate this problem, such as providing participants with training and assistance, or designing specific questionnaires, as well as a discussion of their limitations and downsides. A sketch of the simulation procedure follows Figure 8.


Figure 8: Mean effect size and consistency over the total number of participants in the simulated within- and between-subjects experiments. The shaded regions for the effect size correspond to the 5th and 95th percentiles of the values obtained with the simulations. "2nd cond" refers to the results when considering only the second condition of each participant, similar to the way the between-subjects design was created with the first condition.
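
To make the resampling procedure concrete, here is a minimal Python sketch of such a Monte-Carlo comparison. The data are synthetic and the t-tests and effect magnitudes are illustrative choices, not the study's actual scores or analysis code; the sketch only shows how within- and between-subjects experiments of a given size can be simulated from a pool of paired scores.

    import numpy as np
    from scipy.stats import ttest_rel, ttest_ind

    rng = np.random.default_rng(0)

    # Synthetic stand-in for the paired embodiment scores (sync, async)
    # of 92 participants; the real data came from the experiment above.
    sync = rng.normal(5.0, 1.0, 92)
    asyn = sync - rng.normal(0.8, 1.0, 92)  # asynchronous rated lower

    def replication_rate(n, between, trials=2000, alpha=0.05):
        """Fraction of simulated n-participant experiments detecting the effect."""
        hits = 0
        for _ in range(trials):
            idx = rng.choice(92, size=n, replace=False)
            if between:
                # Between-subjects: each participant contributes only one
                # condition (their "first"), so split the sample in two groups.
                half = n // 2
                p = ttest_ind(sync[idx[:half]], asyn[idx[half:]]).pvalue
            else:
                # Within-subjects: paired scores from the same participants.
                p = ttest_rel(sync[idx], asyn[idx]).pvalue
            hits += p < alpha
        return hits / trials

    for n in (12, 24, 48, 92):
        print(n, replication_rate(n, between=False), replication_rate(n, between=True))

With data of this shape, the paired design typically detects the effect with far fewer participants than the split-group design, echoing the pattern reported in Figure 8.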

9 Partnerships and cooperations

9.1 International initiatives

9.1.1 Participation in other International Programs

Réapp

Participants: Géry Casiez, Edward Lank, Sylvain Malacria, Yuan Chen.

  • Title:
    Reappearing Interfaces in Ubiquitous Environments
  • Partner Institution(s):
    Université de Lille, France and University of Waterloo, Canada
  • Date/Duration:
    2019 - 2023
  • Additional info/keywords:

    The LAI Réapp is a Université de Lille International Associated Laboratory between Loki and the Cheriton School of Computer Science of the University of Waterloo in Canada. It is funded by the Université de Lille to ease shared student supervision and regular inter-group contacts (with Edward Lank, Daniel Vogel & Keiko Katsuragawa at the University of Waterloo). The partner universities also co-funded the co-tutelle Ph.D. thesis of Yuan Chen.

    We are at the dawn of the next computing paradigm where everything will be able to sense human input and augment its appearance with digital information without using screens, smartphones, or special glasses—making user interfaces simply disappear. This introduces many problems for users, including the discoverability of commands and use of diverse interaction techniques, the acquisition of expertise, and the balancing of trade-offs between inferential (AI) and explicit (user-driven) interactions in aware environments. We argue that interfaces must reappear in an appropriate way to make ubiquitous environments useful and usable. This project tackles these problems, addressing (1) the study of human factors related to ubiquitous and augmented reality environments, and the development of new interaction techniques helping to make interfaces reappear; (2) the improvement of transition between novice and expert use and optimization of skill transfer; and, last, (3) the question of delegation in smart interfaces, and how to adapt the trade-off between implicit and explicit interaction.

9.2 International research visitors

9.2.1 Visits of international scientists

Inria International Chair
Marcelo M. Wanderley

Professor, Schulich School of Music/IDMIL, McGill University (Canada)

Title: Expert interaction with devices for musical expression (2017 - 2022)

Participants: Stéphane Huot, Thomas Pietrzak.

The main topic of this project is expert interaction with devices for musical expression, with two main directions: the design of digital musical instruments (DMIs) and the evaluation of interactions with such instruments. It benefits from the unique, complementary expertise available in the Loki team, including the design and evaluation of interactive systems, the definition and implementation of software tools to track modifications and to visualize and haptically display data, as well as the study of expertise development in human-computer interaction contexts. The project's main goal is to bring together advanced research on devices for musical expression (IDMIL – McGill) and cutting-edge research in Human-Computer Interaction (Loki).

Joint publications in 2022: 18

Edward Lank

Professor at Cheriton School of Computer Science, University of Waterloo (Canada)

Title: Rich, Reliable Interaction in Ubiquitous Environments (2019 - 2022)

Participants: Géry Casiez, Sylvain Malacria, Mathieu Nancel, Yuan Chen.

The objectives of the research program are:

  1. Designing Rich Interactions for Ubiquitous and Augmented Reality Environments
  2. Designing Mechanisms and Metaphors for Novices, Experts, and the Novice to Expert Transition
  3. Integrating Intelligence with Human Action in Richly Augmented Environments.

9.3 Informal International Partners

  • Scott Bateman, University of New Brunswick, Fredericton, CA

    interaction in 3D environments (VR, AR)

  • Audrey Girouard, Carleton University, Ottawa, CA

    flexible input devices, interactions for digital fabrication (co-tutelle thesis of Johann Felipe Gonzalez Avila)

  • Simon Perrault, Singapore University of Technology and Design, Singapore

    study and design of touch-based interactions 11

  • Daniel Vogel, University of Waterloo, Waterloo, CA

    3D rotation techniques 15, spatially augmented reality (co-tutelle thesis of Yuan Chen) and polymorphic documents (co-supervision of Damien Masson's Ph.D. thesis)

9.4 National initiatives

9.4.1 ANR

Causality (JCJC, 2019-2023)

Integrating Temporality and Causality to the Design of Interactive Systems

Participants: Géry Casiez, Stéphane Huot, Alice Loizeau, Sylvain Malacria, Mathieu Nancel [contact person], Philippe Schmid.

The project addresses a fundamental limitation in the way interfaces and interactions are designed, and even thought about, today: an issue we call procedural information loss, whereby once a task has been completed by a computer, significant information that was used or produced while processing it is rendered inaccessible, regardless of the multiple other purposes it could serve. This hampers the identification and resolution of usability issues, as well as the development of new and beneficial interaction paradigms. We will explore, develop, and promote finer-grained and better-described connections between the causes of state changes in interactive systems, their context, their consequences, and their timing. We will apply this to facilitate the real-time detection, disambiguation, and solving of frequent timing issues related to human reaction time and system latency; to provide broader access to all levels of input data, thereby reducing the need to "hack" existing frameworks to implement novel interactive systems; and to greatly increase the scope and expressiveness of command histories, allowing better error recovery but also extended editing capabilities such as the reuse and sharing of previous actions.

Web site: http://loki.lille.inria.fr/causality/

Discovery (JCJC, 2020-2024)

Promoting and improving discoverability in interactive systems

Participants: Géry Casiez, Sylvain Malacria [contact person], Eva Mackamul, Raphaël Perraud.

This project addresses a fundamental limitation in the way interactive systems are usually designed: in practice, they do not tend to foster the discovery of their input methods (operations that can be used to communicate with the system) and corresponding features (commands and functionalities that the system supports). Its objective is to provide generic methods and tools to help the design of discoverable interactive systems: we will define validation procedures that can be used to evaluate the discoverability of user interfaces, design and implement novel UIs that foster input method and feature discovery, and create a design framework of discoverable user interfaces. This project investigates, but is not limited to, the context of touch-based interaction, and will also explore two critical timings when the user might trigger a reflective practice on the available inputs and features: while the user is carrying out her task (discovery in-action), and after having carried it out, through informed reflection on her past actions (discovery on-action). This dual investigation will reveal more generic and context-independent properties that will be summarized in a comprehensive framework of discoverable interfaces. Our ambition is to trigger a significant change in the way all interactive systems and interaction techniques, existing and new, are conceived, designed, and implemented, with both performance and discoverability in mind.

Web site: http://ns.inria.fr/discovery

Related publications in 2022: 11

PerfAnalytics (PIA “Sport de très haute performance”, 2020-2023)

In situ performance analysis

Participants: Géry Casiez, Bruno Fruchard, Stéphane Huot [contact person], Sylvain Malacria.

The objective of the PerfAnalytics project is to study how video analysis, now a standard tool in sports training and practice, can be used to quantify various performance indicators and deliver feedback to coaches and athletes. The project, supported by the boxing, cycling, gymnastics, wrestling, and mountain and climbing federations, aims to provide sports partners with a scientific approach to video analysis, by coupling existing technical results on the estimation of gestures and figures from video with biomechanical methodologies for advanced gesture objectification (e.g., muscular analysis).

Partners: the project involves several academic partners (Inria, INSEP, Univ. Grenoble Alpes, Univ. Poitiers, Univ. Aix-Marseille, Univ. Eiffel), as well as elite staff and athletes from different Olympic disciplines (Climbing, BMX Race, Gymnastics, Boxing and Wrestling).

Web site: https://perfanalytics.fr/

MIC (PRC, 2022-2026)

Microgesture Interaction in Context

Participants: Thomas Pietrzak [contact person], Sylvain Malacria.

MIC aims at studying and promoting microgesture-based interaction by putting it into practice in real-life situations. Microgestures are gestures performed with one hand on that same hand, such as tap and swipe gestures performed by one finger on another finger. We study interaction techniques based on microgestures, or on the combination of microgestures with another modality including haptic feedback, as well as mechanisms that support the discoverability and learnability of microgestures.

Partners: Univ. Grenoble Alpes, Inria, Univ. Toulouse 2, CNRS, Institut des Jeunes Aveugles, Immersion SA.

Web site: https://mic.imag.fr

9.4.2 Inria Project Labs

AVATAR (2018-2022)

The next generation of our virtual selves in digital worlds

Participants: Marc Baloup, Géry Casiez, Stéphane Huot, Thomas Pietrzak [contact person], Grégoire Richard.

This project aims at delivering the next generation of virtual selves, or avatars, in digital worlds. In particular, we want to push further the limits of perception and interaction through our avatars, to obtain avatars that are better embodied and more interactive. Loki's contribution to this project consists in designing novel 3D interaction paradigms for avatar-based interaction, and new multi-sensory feedback to better feel our interactions through our avatars.

Partners: Inria's GRAPHDECO, HYBRID, MIMETIC, MORPHEO & POTIOC teams, Mel Slater (Event Lab, University Barcelona, Spain), Technicolor and Faurecia.

Web site: https://avatar.inria.fr/

Related publication in 2022: 20

9.5 Regional initiatives

Ariane (Start-AIRR région Hauts-de-France, 2020-2022)

Validation of the feasibility and relevance of the use of haptic signals for the transmission of complex information

Participants: Thomas Pietrzak [contact person], Rahul Kumar Ray.

Tactons are abstract, structured tactile messages that can be used to convey information in a non-visual way. Several tactile parameters of vibrations have been explored as a medium for encoding information, such as rhythm, roughness, and spatial location. This has been further extended to several other haptic technologies such as pin arrays, demonstrating the possibility of giving directional cues to help visually impaired children explore simple electrical circuit diagrams and geometric shapes. More recently, we have worked in the group on other haptic technologies, in particular a non-visual display that uses the sense of touch around the wrist. The latter can, for example, create the illusion of a vibration moving continuously on the skin.

In this project, we use this new tactile feedback to create Tactons and use them in consumer applications. We compare the parameters of different tactile animation techniques and evaluate people's ability to recognize them. The Hauts-de-France regional funding allowed us to hire an engineer for 12 months, who implemented the software needed to design and study appropriate haptic cues.
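
As an illustration of the kind of tactile animation involved, the following minimal Python sketch computes amplitude envelopes that sweep a vibration once around a ring of wrist-worn actuators, crossfading between adjacent actuators so the sensation appears to move continuously. The actuator count, control rate, and linear crossfade are illustrative assumptions, not the project's actual rendering algorithm.

    import numpy as np

    def moving_vibration(n_actuators, duration, rate=200):
        """Amplitude envelopes sweeping a vibration once around the wrist.

        Returns an (n_samples, n_actuators) array: at each control step the
        virtual vibration sits between two adjacent actuators, whose
        amplitudes are crossfaded so the sensation appears to move smoothly.
        """
        n = int(duration * rate)
        env = np.zeros((n, n_actuators))
        for i in range(n):
            pos = (i / n) * n_actuators      # virtual position in [0, n_actuators)
            a = int(pos) % n_actuators       # actuator behind the virtual point
            b = (a + 1) % n_actuators        # actuator ahead of it
            frac = pos - int(pos)
            env[i, a] = 1.0 - frac           # fade out the trailing actuator
            env[i, b] = frac                 # fade in the leading actuator
        return env

    # E.g., a 1-second sweep over 4 actuators, sent to the driver at 200 Hz:
    envelopes = moving_vibration(4, 1.0)

Varying such parameters (sweep speed, number of actuators, crossfade shape) is the kind of comparison between tactile animation techniques studied in the project.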

10 Dissemination

10.1 Promoting scientific activities

10.1.1 Scientific events: organisation

Member of the organizing committees

10.1.2 Scientific events: selection

Chair of conference program committees
  • EICS (ACM): Stéphane Huot (co-chair for Late-Breaking Results)
Member of the conference program committees
  • ISS (ACM): Bruno Fruchard
  • CHI (ACM): Géry Casiez, Mathieu Nancel
  • EICS (ACM): Stéphane Huot
  • IHM: Sylvain Malacria
  • VR (IEEE): Thomas Pietrzak
  • HAID: Thomas Pietrzak
Reviewer
  • CHI (ACM): Yuan Chen, Bruno Fruchard, Sylvain Malacria, Thomas Pietrzak
  • DIS (ACM): Bruno Fruchard, Mathieu Nancel
  • UIST (ACM): Géry Casiez, Bruno Fruchard, Sylvain Malacria, Mathieu Nancel, Thomas Pietrzak
  • Mobile HCI (ACM): Bruno Fruchard, Sylvain Malacria
  • VR (IEEE): Géry Casiez, Yuan Chen, Bruno Fruchard, Grégoire Richard
  • EuroHaptics: Bruno Fruchard
  • SIGGRAPH (ACM): Géry Casiez
  • CSCW (ACM): Mathieu Nancel
  • VRST (ACM): Yuan Chen

10.1.3 Journal

Reviewer - reviewing activities

10.1.4 Invited talks

  • “Capacités humaines et systèmes interactifs” (Human capabilities and interactive systems), Collège de France seminar – lecture series of Wendy Mackay (annual chair “Informatique et Sciences numériques”), Paris: Géry Casiez
  • “The theory behind the discovery of interactions” – Laboratoire d'InfoRmatique en Image et Systèmes d'information (LIRIS): Eva Mackamul
  • “Communicating with and Increasing Interactivity in Research Illustrations” – Laboratoire d'InfoRmatique en Image et Systèmes d'information (LIRIS): Sylvain Malacria
  • “Recognition or recall? The case of ‘expert’ features in Graphical User Interfaces” – Laboratoire d'Informatique de Grenoble (LIG): Sylvain Malacria
  • “The Design and Production of Interaction Illustrations” – 2nd biennial Franco-Italian young Researcher Meetup in Computer-Human Interaction (AFIRM CHI 2022), Padova (Italy): Sylvain Malacria

10.1.5 Leadership within the scientific community

10.1.6 Scientific expertise

  • Agence Nationale de la Recherche (ANR): Mathieu Nancel (member of the CES33 “Interaction and Robotics” committee), Géry Casiez (reviewer for the JCJC track)

10.1.7 Research administration

For Inria

  • Evaluation Committee: Stéphane Huot (member)
  • PEPR eNSEMBLE (~40M€ national research program): Stéphane Huot (program director for Inria)

For Inria center at the University of Lille

  • Direction Board: Stéphane Huot (Head of Science)
  • “Commission des Utilisateurs des Moyens Informatique” (CUMI): Mathieu Nancel (president, since December)
  • “Commission des Emplois de Recherche” (CER): Stéphane Huot (member), Sylvain Malacria (member)
  • “Commission de Développement Technologique” (CDT): Stéphane Huot (member), Mathieu Nancel (member, until November)
  • “Comité Opérationnel d'Évaluation des Risques Légaux et Éthiques” (COERLE, the Inria Ethics board): Thomas Pietrzak (local correspondent)

For the Université de Lille

  • MADIS Graduate School council: Géry Casiez (member)
  • Computer Science Department commission mixte: Thomas Pietrzak (member)
  • Coordinator for internships at IUT de Lille: Géry Casiez
  • Co-coordinator for internships at the Computer Science Department: Damien Pollet

For the CRIStAL lab of Université de Lille & CNRS

  • Direction Board: Géry Casiez (Deputy Director)
  • Computer Science PhD recruiting committee: Géry Casiez (member)

Hiring committees

  • Inria's committee for Senior Researcher Positions (DR2): Stéphane Huot (member)
  • Inria's committees for Junior Researcher Positions (CRCN/ISFP) in Lille and Rennes: Stéphane Huot (member)
  • Université de Lille's committee for Assistant Professor Positions in Computer Science (IUT): Géry Casiez (vice-president)
  • Université Paris-Saclay's committee for Professor Positions in Computer Science (Polytech): Géry Casiez (member)

10.2 Teaching - Supervision - Juries

10.2.1 Teaching

  • Master Informatique: Géry Casiez (8h), Mathieu Nancel (8h), Sylvain Malacria (12h), Thomas Pietrzak (20h), Interactions Humain-Machine avancées, M2, Université de Lille
  • Master Informatique: Thomas Pietrzak (54h), Sylvain Malacria (48h), Interaction Humain-Machine, M1, Université de Lille
  • Master Informatique: Thomas Pietrzak (21h), Initiation à l'Innovation et à la Recherche, M1, Université de Lille
  • Licence Informatique: Thomas Pietrzak (42h), Sylvain Malacria (3h), Bruno Fruchard (21.5h), Introduction à l'Interaction Humain-Machine, L3, Université de Lille
  • Licence Informatique: Thomas Pietrzak (18h), Logique, L2, Université de Lille
  • Doctoral course: Géry Casiez (12h), Experimental research and statistical methods for Human-Computer Interaction, Université de Lille
  • BUT Informatique: Géry Casiez (38h), Grégoire Richard (28h), IHM, 1st year, IUT de Lille - Université de Lille
  • BUT Informatique: Grégoire Richard (36h), BDD, 1st year, IUT de Lille - Université de Lille
  • BUT Informatique: Grégoire Richard (89h), Algorithmes et Programmation, 1st year, IUT A de Lille - Université de Lille
  • Cursus ingénieur: Sylvain Malacria (9h), 3DETech, IMT Lille-Douai
  • Licence Informatique: Damien Pollet (18h), Informatique, L1, Université de Lille
  • Licence Informatique: Damien Pollet (24h), Projet, L2, Université de Lille
  • Licence Informatique: Damien Pollet (21h), Bases de la programmation C, L2, Université de Lille
  • Licence Informatique: Damien Pollet (21h), Maîtrise de la programmation C, L2, Université de Lille
  • Licence Informatique: Damien Pollet (18h), Conception orientée objet, L3, Université de Lille
  • Licence Informatique: Damien Pollet (21h), Programmation des systèmes, L3, Université de Lille
  • Licence Informatique: Damien Pollet (18h), Programmation des systèmes: approfondissements, L3, Université de Lille
  • Master Informatique: Damien Pollet (27h), Langages et Modèles Dédiés, M2, Université de Lille

10.2.2 Supervision

  • PhD in progress: Raphaël Perraud, Fostering the discovery of interactions through adapted tutorials, started Nov. 2022, advised by Sylvain Malacria
  • PhD in progress: Alice Loizeau, Understanding and designing around error in interactive systems, started Oct. 2021, advised by Stéphane Huot & Mathieu Nancel
  • PhD in progress: Yuan Chen, Adaptive Interactions on Surfaces with an Augmented Lamp, started Dec. 2020, advised by Géry Casiez, Sylvain Malacria & Edward Lank (co-tutelle with University of Waterloo, Canada)
  • PhD in progress: Eva Mackamul, Towards a Better Discoverability of Interactions in Graphical User Interfaces, started Oct. 2020, advised by Géry Casiez & Sylvain Malacria
  • PhD in progress: Travis West, Examining the Design of Musical Interaction: The Creative Practice and Process, started Oct. 2020, advised by Stéphane Huot & Marcelo Wanderley (co-tutelle with McGill University, Canada)
  • PhD in progress: Johann Felipe González Ávila, Improving 3D design for personal fabrication, started Sep. 2020, advised by Géry Casiez, Thomas Pietrzak & Audrey Girouard (co-tutelle with Carleton University, Canada)
  • PhD in progress: Grégoire Richard, Touching Avatars: Role of Haptic Feedback during Interactions with Avatars in Virtual Reality, started Oct. 2019, advised by Géry Casiez & Thomas Pietrzak
  • PhD in progress: Philippe Schmid, Command History as a Full-fledged Interactive Object, started Oct. 2019, advised by Stéphane Huot & Mathieu Nancel

10.2.3 Juries

  • Catherine Letondal (HDR, École Nationale de l'Aviation Civile/Université de Toulouse): Stéphane Huot, reviewer
  • Thomas Pietrzak (HDR 25, Université de Lille): Stéphane Huot, examiner & sponsor
  • Garreth Barnaby (PhD, University of Bristol): Thomas Pietrzak, reviewer
  • Eugénie Brasier (PhD, Université Paris Saclay): Mathieu Nancel, examiner
  • Benoît Geslain (PhD, Sorbonne Université): Thomas Pietrzak, reviewer
  • Anatolii Khalin (PhD, Université de Lille): Géry Casiez, examiner
  • Flavien Lebrun (PhD, Sorbonne Université): Géry Casiez, reviewer
  • Alice Martin (PhD, École Nationale de l'Aviation Civile/ISAE-SUPAERO): Stéphane Huot, reviewer

10.2.4 PhD mid-term evaluation committees

  • Adrien Chaffangeon Caillet (Université Grenoble Alpes): Mathieu Nancel
  • Nikhita Joshi (University of Waterloo): Géry Casiez
  • Brice Parilusyan (De Vinci Innovation Center): Thomas Pietrzak
  • Thibault Simon (Université de Lille): Géry Casiez
  • Pierrick Uro (Université de Lille): Géry Casiez
  • Nicolas Viot (École Nationale de l'Aviation Civile): Stéphane Huot
  • Mayssa Zaier (Université de Lille): Géry Casiez

10.3 Popularization

10.3.1 Education