LOKI - 2023

2023 Activity Report - Project-Team LOKI

RNSR: 201822657D
  • Research center: Inria Centre at the University of Lille
  • In partnership with: Université de Lille
  • Team name: Technology & Knowledge for Interaction
  • In collaboration with: Centre de Recherche en Informatique, Signal et Automatique de Lille
  • Domain: Perception, Cognition and Interaction
  • Theme: Interaction and visualization

Keywords

Computer Science and Digital Science

  • A2.1.3. Object-oriented programming
  • A2.1.12. Dynamic languages
  • A5.1.1. Engineering of interactive systems
  • A5.1.2. Evaluation of interactive systems
  • A5.1.3. Haptic interfaces
  • A5.1.5. Body-based interfaces
  • A5.1.6. Tangible interfaces
  • A5.1.8. 3D User Interfaces
  • A5.1.9. User and perceptual studies
  • A5.2. Data visualization
  • A5.6. Virtual reality, augmented reality
  • A5.6.1. Virtual reality
  • A5.6.2. Augmented reality
  • A5.6.3. Avatar simulation and embodiment
  • A5.6.4. Multisensory feedback and interfaces
  • A5.7.2. Music

Other Research Topics and Application Domains

  • B2.8. Sports, performance, motor skills
  • B6.1.1. Software engineering
  • B9.2.1. Music, sound
  • B9.4. Sports
  • B9.5.1. Computer science
  • B9.5.6. Data science
  • B9.6.10. Digital humanities
  • B9.8. Reproducibility

1 Team members, visitors, external collaborators

Research Scientists

  • Stéphane Huot [Team leader, Inria, Senior Researcher, HDR]
  • Bruno Fruchard [Inria, ISFP]
  • Sylvain Malacria [Inria, Researcher, HDR]
  • Mathieu Nancel [Inria, Researcher]

Faculty Members

  • Géry Casiez [Université de Lille, Professor, HDR]
  • Thomas Pietrzak [Université de Lille, Associate Professor, HDR]
  • Damien Pollet [Université de Lille, Associate Professor]
  • Aurélien Tabard [Université Lyon 1, Associate Professor, (in delegation)]

PhD Students

  • Yuan Chen [Université de Lille & University of Waterloo (Canada), from Dec 2023]
  • Johann Gonzalez Avila [Université de Lille & Carleton University (Canada)]
  • Suliac Lavenant [Inria, from Oct 2023]
  • Alice Loizeau [Inria]
  • Eva Mackamul [Inria]
  • Raphael Perraud [Inria]
  • Grégoire Richard [Inria, until Aug 2023]
  • Philippe Schmid [Inria, until Jun 2023]
  • Travis West [Université de Lille & McGill University (Canada)]

Technical Staff

  • Axel Antoine [Inria, Engineer, until Jan 2023]
  • Raphaël James [Inria, Engineer, from Mar 2023]
  • Timo Maszewski [Inria, Engineer, from Apr 2023]

Interns and Apprentices

  • Azammat Charaf Zadah [ENS Paris, Intern, from Jun 2023 until Aug 2023]
  • Suliac Lavenant [Université de Lille, Intern, from Apr 2023 until Sep 2023]
  • Mika Liao [ENS Paris, Intern, from Jun 2023 until Aug 2023]

Administrative Assistant

  • Lucille Leclercq [Inria]

2 Overall objectives

Human-Computer Interaction (HCI) is a constantly moving field  41. Changes in computing technologies extend their possible uses, and modify the conditions of existing uses. People also adapt to new technologies and adjust them to their own needs  46. New problems and opportunities thus regularly arise and must be addressed from the perspectives of both the user and the machine, to understand and account for the tight coupling between human factors and interactive technologies. Our vision is to connect these two elements: Knowledge & Technology for Interaction.

2.1 Knowledge for Interaction

In the early 1960s, when computers were scarce, expensive, bulky, and formally scheduled machines used for automatic computations, Engelbart saw their potential as personal interactive resources. He saw them as tools we would purposefully use to carry out particular tasks and that would empower people by supporting intelligent use  37. Others at the same time saw computers differently: as partners, intelligent entities to whom we would delegate tasks. These two visions still constitute the roots of today's predominant HCI paradigms, use and delegation. In the delegation approach, a lot of effort has been made to support oral, written and non-verbal forms of human-computer communication, and to analyze and predict human behavior. But the inconsistency and ambiguity of human beings, and the variety and complexity of contexts, make these tasks very difficult  51, and the machine is thus the center of interest.

2.1.1 Computers as tools

The focus of Loki is not on what machines can understand or do by themselves, but on what people can do with them. We do not reject the delegation paradigm but clearly favor the one of tool use, aiming for systems that support intelligent use rather than for intelligent systems. And as the frontier is getting thinner, one of our goals is to better understand what makes an interactive system perceived as a tool or as a partner, and how the two paradigms can be combined for the best benefit of the user.

2.1.2 Empowering tools

The ability provided by interactive tools to create and control complex transformations in real-time can support intellectual and creative processes in unusual but powerful ways. But mastering powerful tools is neither simple nor immediate; it requires learning and practice. Our research in HCI should not just focus on novice or highly proficient users; it should also care about intermediate ones willing to devote time and effort to develop new skills, be it for work or leisure.

2.1.3 Transparent tools

Technology is most empowering when it is transparent: invisible in effect, it does not get in your way but lets you focus on the task. Heidegger characterized this unobtrusive relation to things with the term zuhanden (ready-to-hand). Transparency of interaction is not best achieved with tools mimicking human capabilities, but with tools taking full advantage of them given the context and task. For instance, the transparency of driving a car “is not achieved by having a car communicate like a person, but by providing the right coupling between the driver and action in the relevant domain (motion down the road)”  54. Our actions towards the digital world need to be digitized and we must receive proper feedback in return. But input and output technologies pose somewhat inevitable constraints while the number, diversity, and dynamicity of digital objects call for more and more sophisticated perception-action couplings for increasingly complex tasks. We want to study the means currently available for perception and action in the digital world: Do they leverage our perceptual and control skills? Do they support the right level of coupling for transparent use? Can we improve them or design more suitable ones?

2.2 Technology for Interaction

Studying the interactive phenomena described above is one of the pillars of HCI research, in order to understand, model and ultimately improve them. Yet, we have to make those phenomena happen, to make them possible and reproducible, be it for further research or for their diffusion  40. However, because of the high viscosity and the lack of openness of current systems, this requires considerable efforts in designing, engineering, implementing and hacking hardware and software interactive artifacts. This is what we call “The Iceberg of HCI Research”, of which the hidden part supports the design and study of new artifacts, but also informs their creation process.

2.2.1 “Designeering Interaction”

Both parts of this iceberg strongly influence each other: The design of interaction techniques (the visible top) informs on the capabilities and limitations of the platform and the software being used (the hidden bottom), giving insights into what could be done to improve them. On the other hand, new architectures and software tools open the way to new designs, by giving the necessary bricks to build with  42. These bricks define the adjacent possible of interactive technology, the set of what could be designed by assembling the parts in new ways. Exploring ideas that lie outside of the adjacent possible requires the necessary technological evolutions to be addressed first. This is a slow, gradual, but uncertain process, which helps to explore and fill a number of gaps in our research field but can also lead to deadlocks. We want to better understand and master this process—i.e., analyzing the adjacent possible of HCI technology and methods—and introduce tools to support and extend it. This could help to make technology better suited to the exploration of the fundamentals of interaction, and to their integration into real systems, a way to ultimately improve interactive systems to be empowering tools.

2.2.2 Computers vs Interactive Systems

In fact, today's interactive systems—e.g., desktop computers, mobile devices—share very similar layered architectures inherited from the first personal computers of the 1970s. This abstraction of resources provides developers with standard components (UI widgets) and high-level input events (mouse and keyboard) that obviously ease the development of common user interfaces for predictable and well-defined tasks and users' behaviors. But it does not favor the implementation of non-standard interaction techniques that could be better adapted to more particular contexts, or to expressive and creative uses. Those often require going deeper into the system layers, and hacking them to get access to the required functionalities and/or data, which implies switching between programming paradigms and/or languages.

And these limitations are even more pervasive as interactive systems have changed deeply in the last 20 years. They are no longer limited to a simple desktop or laptop computer with a display, a keyboard and a mouse. They are becoming more and more distributed and pervasive (e.g., mobile devices, Internet of Things). They are changing dynamically with recombinations of hardware and software (e.g., transition between multiple devices, modular interactive platforms for collaborative use). Systems are moving “out of the box” with Augmented Reality, and users are going “inside the box” with Virtual Reality. This is obviously raising new challenges in terms of human factors, usability and design, but it also deeply questions current architectures.

2.2.3 The Interaction Machine

We believe that promoting digital devices to empowering tools requires both better fundamental knowledge about interaction phenomena AND revisiting the architecture of interactive systems in order to support this knowledge. By following a comprehensive systems approach—encompassing human factors, hardware elements, and all software layers above—we want to define the founding principles of an Interaction Machine:

  • a set of hardware and software requirements with associated specifications for interactive systems to be tailored to interaction by leveraging human skills;
  • one or several implementations to demonstrate and validate the concept and the specifications in multiple contexts;
  • guidelines and tools for designing and implementing interactive systems, based on these specifications and implementations.

To reach this goal, we will adopt an opportunistic and iterative strategy guided by the designeering approach, where the engineering aspect will be fueled by the interaction design and study aspect. We will address several fundamental problems of interaction related to our vision of “empowering tools”, which, in combination with state-of-the-art solutions, will instruct us on the requirements for the solutions to be supported in an interactive system. This consists in reifying the concept of the Interaction Machine into multiple contexts and for multiple problems, before converging towards a more unified definition of “what is an interactive system”, the ultimate Interaction Machine, which constitutes the main scientific and engineering challenge of our project.

3 Research program

Interaction is by nature a dynamic phenomenon that takes place between interactive systems and their users. Redesigning interactive systems to better account for interaction requires a fine understanding of these dynamics from the user side so as to better handle them from the system side. In fact, the layers of current interactive systems abstract hardware and system resources from a system and programming perspective. Following our Interaction Machine concept, we are reconsidering these architectures from the user's perspective, through different levels of dynamics of interaction (see Figure 1).

Figure 1: Levels of dynamics of interaction that we consider in our research program.

Considering phenomena that occur at each of these levels as well as their relationships will help us to acquire the necessary knowledge (Empowering Tools) and technological bricks (Interaction Machine) to reconcile the way interactive systems are designed and engineered with human abilities. Although our strategy is to investigate issues and address challenges for all three levels, our immediate priority is to focus on micro-dynamics since it concerns very fundamental knowledge about interaction and relates to very low-level parts of interactive systems, which is likely to influence our future research and developments at the other levels.

3.1 Micro-Dynamics

Micro-dynamics involve low-level phenomena and human abilities related to very short time scales and to the perception-action coupling in interaction, when the user has almost no control over or awareness of the action once it has started. From a system perspective, this mostly has implications for input and output (I/O) management.

3.1.1 Transfer functions design and latency management

We have developed recognized expertise in the characterization and the design of transfer functions  36, 50, i.e., the algorithmic transformations of raw user input for system use. Ideally, transfer functions should match the interaction context. Yet the question of how to maximize one or more criteria in a given context remains open, and on-demand adaptation is difficult because transfer functions are usually implemented at the lowest possible level to avoid latency. Latency has indeed long been known as a determinant of human performance in interactive systems  45 and recently regained attention with touch interactions  43. These two problems require cross examination to improve performance with interactive systems: Latency can be a confounding factor when evaluating the effectiveness of transfer functions, and transfer functions can also include algorithms to compensate for latency.
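
To make the notion concrete, a transfer function can be sketched as a speed-dependent gain applied to raw input displacements. The curve shape, constants, and function names below are illustrative assumptions, not those of any deployed system:

```python
def gain(speed_m_s, g_min=1.0, g_max=8.0, v_low=0.05, v_high=0.5):
    """Illustrative piecewise-linear acceleration curve: low gain for
    slow, precise movements, high gain for fast, ballistic ones."""
    if speed_m_s <= v_low:
        return g_min
    if speed_m_s >= v_high:
        return g_max
    t = (speed_m_s - v_low) / (v_high - v_low)
    return g_min + t * (g_max - g_min)

def transfer(dx_counts, dt_s, cpi=1000, ppi=96):
    """Map one raw sensor displacement report (in counts) to an
    on-screen displacement (in pixels)."""
    dx_m = dx_counts * 0.0254 / cpi      # counts -> meters
    speed = abs(dx_m) / dt_s             # device speed in m/s
    return gain(speed) * dx_m * ppi / 0.0254
```

Keeping the gain a pure function of device speed is what makes such a mapping a candidate for per-user or per-context adaptation, as discussed below.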

We have proposed new cheap but robust methods for input filtering 3 and for the measurement of end-to-end latency  35 and worked on compensation methods  49 and the evaluation of their perceived side effects 9. Our goal is then to automatically adapt transfer functions to individual users and contexts of use, which we started in  44, while reducing latency in order to support stable and appropriate control. To achieve this, we will investigate combinations of low-level (embedded) and high-level (application) ways to take user capabilities and task characteristics into account and reduce or compensate for latency in different contexts, e.g., using a mouse or a touchpad, a touch-screen, an optical finger navigation device or a brain-computer interface. From an engineering perspective, this knowledge on low-level human factors will help us to rethink and redesign the I/O loop of interactive systems in order to better account for them and achieve more adapted and adaptable perception-action coupling.
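
The input-filtering idea can be illustrated with speed-adaptive exponential smoothing in the spirit of speed-adaptive filters such as the 1€ filter: the cutoff frequency increases with the signal's speed, trading jitter reduction at rest for low lag during fast motion. This is a minimal sketch with illustrative parameter values and class names, not the published implementation:

```python
import math

class SpeedAdaptiveFilter:
    """Sketch of a one-pole low-pass filter whose cutoff adapts to the
    (smoothed) derivative of the signal. Hypothetical names/values."""
    def __init__(self, min_cutoff=1.0, beta=0.007, d_cutoff=1.0):
        self.min_cutoff, self.beta, self.d_cutoff = min_cutoff, beta, d_cutoff
        self.x_prev = None
        self.dx_prev = 0.0

    @staticmethod
    def _alpha(cutoff, dt):
        # Smoothing factor for a given cutoff frequency and period.
        tau = 1.0 / (2 * math.pi * cutoff)
        return 1.0 / (1.0 + tau / dt)

    def __call__(self, x, dt):
        if self.x_prev is None:          # first sample: pass through
            self.x_prev = x
            return x
        # Estimate and smooth the derivative, then adapt the cutoff:
        # fast motion -> higher cutoff -> less smoothing -> less lag.
        dx = (x - self.x_prev) / dt
        a_d = self._alpha(self.d_cutoff, dt)
        self.dx_prev = a_d * dx + (1 - a_d) * self.dx_prev
        cutoff = self.min_cutoff + self.beta * abs(self.dx_prev)
        a = self._alpha(cutoff, dt)
        self.x_prev = a * x + (1 - a) * self.x_prev
        return self.x_prev
```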

3.1.2 Tactile feedback & haptic perception

We are also concerned with the physicality of human-computer interaction, with a focus on haptic perception and related technologies. For instance, when interacting with virtual objects such as software buttons on a touch surface, the user cannot feel the click sensation as with physical buttons. The tight coupling between how we perceive and how we manipulate objects is then essentially broken although this is instrumental for efficient direct manipulation. We have addressed this issue in multiple contexts by designing, implementing and evaluating novel applications of tactile feedback 5.

In comparison with many other modalities, one difficulty with tactile feedback is its diversity: it groups sensations of forces, vibrations, friction, or deformation. Although this diversity is a richness, it also raises usability and technological challenges, since each kind of haptic stimulation requires different kinds of actuators with their own parameters and thresholds, and results from one are hardly applicable to others. From a “knowledge” point of view, we want to better understand and empirically classify haptic variables and the kinds of information they can represent (continuous, ordinal, nominal), their resolution, and their applicability to various contexts. From the “technology” perspective, we want to develop tools to inform and ease the design of haptic interactions taking best advantage of the different technologies in a consistent and transparent way.

3.2 Meso-Dynamics

Meso-dynamics relate to phenomena that arise during interaction, on a longer but still short time-scale. For users, it is related to performing intentional actions, to goal planning and tool selection, and to forming sequences of interactions based on a known set of rules or instructions. From the system perspective, it relates to how possible actions are exposed to the user and how they have to be executed (i.e., interaction techniques). It also has implications for the tools used to design and implement those techniques (programming languages and APIs).

3.2.1 Interaction bandwidth and vocabulary

Interactive systems and their applications have an ever-increasing number of available features and commands due to, e.g., the large amount of data to manipulate, increasing power and number of functionalities, or multiple contexts of use.

On the input side, we want to augment the interaction bandwidth between the user and the system in order to cope with this increasing complexity. In fact, most input devices capture only a few of the movements and actions the human body is capable of. Our arms and hands for instance have many degrees of freedom that are not fully exploited in common interfaces. We have recently designed new technologies to improve expressibility such as a bendable digitizer pen  38, or reliable technology for studying the benefits of finger identification on multi-touch interfaces  39.

On the output side, we want to expand users' interaction vocabulary. All of the features and commands of a system cannot be displayed on screen at the same time, and many advanced features are hidden from users by default (e.g., hotkeys) or buried in deep hierarchies of command-triggering systems (e.g., menus). As a result, users tend to use only a subset of all the tools the system actually offers  48. We will study how to help them broaden their knowledge of available functions.

Through this “opportunistic” exploration of alternative and more expressive input methods and interaction techniques, we will particularly focus on the necessary technological requirements to integrate them into interactive systems, in relation with our redesign of the I/O stack at the micro-dynamics level.

3.2.2 Spatial and temporal continuity in interaction

At a higher level, we will investigate how more expressive interaction techniques affect users' strategies when performing sequences of elementary actions and tasks. More generally, we will explore the “continuity” in interaction. Interactive systems have moved from one computer to multiple connected interactive devices (computer, tablets, phones, watches, etc.) that could also be augmented through a Mixed-Reality paradigm. This distribution of interaction raises new challenges, both in terms of usability and engineering, that we clearly have to consider in our main objective of revisiting interactive systems  47. It involves the simultaneous use of multiple devices and also the changes in the role of devices according to the location, the time, the task, and contexts of use: a tablet device can be used as the main device while traveling, and it becomes an input device or a secondary monitor when resuming that same task once in the office; a smart-watch can be used as a standalone device to send messages, but also as a remote controller for a wall-sized display. One challenge is then to design interaction techniques that support smooth, seamless transitions during these spatial and temporal changes in order to maintain the continuity of uses and tasks, and to determine how to integrate these principles in future interactive systems.

3.2.3 Expressive tools for prototyping, studying, and programming interaction

Current systems suffer from engineering issues that keep constraining and influencing how interaction is thought, designed, and implemented. Addressing the challenges we presented in this section and making the solutions possible require extended expressiveness, and researchers and designers must either wait for the proper toolkits to appear, or “hack” existing interaction frameworks, often bypassing existing mechanisms. For instance, numerous usability problems in existing interfaces stem from a common cause: the lack, or untimely discarding, of relevant information about how events are propagated and how changes come to occur in interactive environments. On top of our redesign of the I/O loop of interactive systems, we will investigate how to facilitate access to that information and also promote a more grounded and expressive way to describe and exploit input-to-output chains of events at every system level. We want to provide finer granularity and better-described connections between the causes of changes (e.g. input events and system triggers), their context (e.g. system and application states), their consequences (e.g. interface and data updates), and their timing 8. More generally, a central theme of our Interaction Machine vision is to promote interaction as a first-class object of the system  34, and we will study alternative and better-adapted technologies for designing and programming interaction, as we did recently to ease the prototyping of Digital Musical Instruments 2 or the programming of graphical user interfaces 10. Ultimately, we want to propose a unified model of hardware and software scaffolding for interaction that will contribute to the design of our Interaction Machine.
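
As one way to picture what keeping causes, contexts, consequences, and timing inspectable could look like, each change in the system could carry a record of its upstream cause, so that input-to-output chains can be reconstructed after the fact. All names below are hypothetical, not an existing API:

```python
from dataclasses import dataclass, field
import time

@dataclass
class Event:
    """Hypothetical event record linking a change to its cause."""
    kind: str                                    # e.g. "pointer.press"
    context: dict = field(default_factory=dict)  # system/app state
    cause: "Event | None" = None                 # upstream event, if any
    timestamp: float = field(default_factory=time.monotonic)

    def chain(self):
        """Walk back from this consequence to its original cause."""
        e, out = self, []
        while e is not None:
            out.append(e.kind)
            e = e.cause
        return list(reversed(out))

# A pointer press triggers a command, which triggers a UI update:
press = Event("pointer.press", context={"target": "button#save"})
command = Event("command.save", cause=press)
update = Event("ui.highlight", cause=command)
```

With such records, a debugging or visualization tool could answer “why did this interface change happen?” by following the `cause` links.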

3.3 Macro-Dynamics

Macro-dynamics involve longer-term phenomena such as skill acquisition, learning of the system's functionalities, and reflexive analysis of one's own use (e.g., when the user has to face novel or unexpected situations that require a high level of knowledge of the system and its functioning). From the system perspective, it implies better supporting cross-application and cross-platform mechanisms so as to favor skill transfer. It also requires improving instrumentation and high-level logging capabilities to favor reflexive use, as well as flexibility and adaptability for users to be able to finely tune and shape their tools.

We want to move away from the usual binary distinction between “novices” and “experts” 4 and explore means to promote and assist digital skill acquisition in a more progressive fashion. Indeed, users have a permanent need to adapt their skills to the constant and rapid evolution of the tasks and activities they carry out on a computer system, but also to the changes in the software tools they use  52. Software strikingly lacks powerful means of acquiring and developing these skills 4, forcing users to mostly rely on outside support (e.g., being guided by a knowledgeable person, following online tutorials of varying quality). As a result, users tend to rely on a surprisingly limited interaction vocabulary, or make do with sub-optimal routines and tools  53. Ultimately, the user should be able to master the interactive system to form durable and stabilized practices that would eventually become automatic and reduce mental and physical effort, making their interaction transparent.

In our previous work, we identified the fundamental factors influencing expertise development in graphical user interfaces, and created a conceptual framework that characterizes users' performance improvement with UIs 4, 7. We designed and evaluated new command selection and learning methods to leverage user's digital skill development with user interfaces, on both desktop and touch-based computers 6.

We are now interested in broader means to support the analytic use of computing tools:

  • to foster understanding of interactive systems. As the digital world shifts to more and more complex systems driven by machine learning algorithms, we increasingly lose our comprehension of which process caused the system to respond in one way rather than another. We will study how novel interactive visualizations can help reveal and expose the “intelligence” behind them, in ways that help people better master their complexity.
  • to foster reflection on interaction. We will study how we can foster users' reflection on their own interaction in order to encourage them to acquire novel digital skills. We will build real-time and off-line software for monitoring how users' ongoing activity is conducted at the application and system levels. We will develop augmented feedback and interactive history visualization tools that will offer contextual visualizations to help users better understand and share their activity, compare their actions to those of others, and discover possible improvements.
  • to optimize skill transfer and tool re-appropriation. The rapid evolution of new technologies has drastically increased the frequency at which systems are updated, often requiring users to relearn everything from scratch. We will explore how we can minimize the cost of having to appropriate an interactive tool by helping users capitalize on their existing skills.

We plan to explore these questions, as well as the use of such aids, in several contexts like web-based, mobile, or BCI-based applications. However, a core aspect of this work will be to design systems and interaction techniques that are as platform-independent as possible, in order to better support skill transfer. Following our Interaction Machine vision, this will lead us to rethink how interactive systems have to be engineered so that they can offer better instrumentation, higher adaptability, and less separation between applications and tasks in order to support reuse and skill transfer.

4 Application domains

Loki works on fundamental and technological aspects of Human-Computer Interaction that can be applied to diverse application domains.

Our 2023 research involved desktop and mobile interaction, gestural interaction, virtual and extended reality, scientific communication supports, haptics, data visualization and sport analytics. Our technical work contributes to the more general application domains of interactive systems engineering.

5 Social and environmental responsibility

5.1 Footprint of research activities

Since 2022, we have included an estimate of the carbon footprint in our provisional travel budget. Although this is not our primary criterion, it at least makes us aware of this footprint and leads us to consider it in our decisions, especially when events can also be attended remotely.

We favor as much as possible low-footprint transportation methods (train, carpool) for travel. Typically, all LOKI members who attended the flagship HCI conference (ACM CHI 2023) in Hamburg went by carpool or train, even though flying was a faster option. We also avoid travelling for long-distance conferences when not necessary. As an example, the ACM CHI 2024 conference will be held in Hawaii, but no LOKI member will attend unless they have to physically present a scientific contribution.

5.2 Impact of research results

Aurélien Tabard participated in the creation of a working group on ecology and sustainability supported by the French association for HCI (AFIHM - IHM-Écologie).

6 Highlights of the year

Stéphane Huot was appointed as Director of the Inria center at the University of Lille (December 19th 2023). Congrats Stéphane!

6.1 Awards

Best paper award (top 1%) from the CHI'23 ACM conference for the paper “ChartDetective: Easy and Accurate Interactive Data Extraction from Complex Vector Charts”, from Damien Masson, Sylvain Malacria, Daniel Vogel, Edward Lank and Géry Casiez 25.

Best paper honorable mention award (top 4%) from the MobileHCI'23 ACM conference for the paper “Exploring Visual Signifier Characteristics to Improve the Perception of Affordances of In-Place Touch Inputs”, from Eva Mackamul, Géry Casiez and Sylvain Malacria 22.

7 New software, platforms, open data

7.1 New software

7.1.1 BoxingCadence

  • Name:
    Annotation Tool to Identify Hits in Boxing Videos
  • Keywords:
    Human Computer Interaction, Video annotation, JavaScript, Annotation tool, Sport
  • Functional Description:
    The video time can be controlled through the mouse or keyboard with frame granularity. Annotations for each athlete can be associated with the current frame by pressing a keyboard key. The tool visualizes all annotations on a timeline below the video, and clicking on one of them jumps to the associated frame in the video.
  • Release Contributions:
    Adaptation of the source code to use a MongoDB database to store and share annotations across several applications
  • News of the Year:

    We updated the software to use a MongoDB database that stores all annotations, facilitating their sharing across an ecosystem of interactive applications that the French boxing federation can use without our direct help.

    We also wrote an article in The Conversation to explain how this tool is leveraged by the federation.

  • Contact:
    Bruno Fruchard
  • Participant:
    Bruno Fruchard
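
The frame-indexed annotation model described above can be sketched as follows; the class, method, and field names are hypothetical, not the tool's actual schema:

```python
class AnnotationTrack:
    """Hypothetical sketch: hit annotations keyed by athlete and video
    frame, from which a simple cadence metric can be derived."""
    def __init__(self, fps=30):
        self.fps = fps
        self.hits = {}                    # athlete -> sorted frame list

    def annotate(self, athlete, frame):
        # Associate a hit with a specific video frame for one athlete.
        self.hits.setdefault(athlete, []).append(frame)
        self.hits[athlete].sort()

    def cadence(self, athlete):
        """Hits per second over the annotated span (0 if < 2 hits)."""
        frames = self.hits.get(athlete, [])
        if len(frames) < 2:
            return 0.0
        duration_s = (frames[-1] - frames[0]) / self.fps
        return (len(frames) - 1) / duration_s
```

A dictionary of per-athlete frame lists like this also serializes naturally to documents in a store such as MongoDB, which is consistent with the sharing scenario described in the release notes.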

7.1.2 ReTracer

  • Keywords:
    Command history, Undo
  • Functional Description:

    ReTracer is a command history architecture, based on the ESCI and Causality models. It can be used to implement advanced command histories in editing software.

    It was developed by Philippe Schmid during his PhD in the Loki Lab (Lille).

  • Author:
    Philippe Schmid
  • Contact:
    Mathieu Nancel

7.1.3 Polyphony

  • Name:
    Polyphony
  • Keywords:
    Human Computer Interaction, Toolkit, Engineering of Interactive Systems
  • Functional Description:
    Polyphony is an experimental toolkit demonstrating the use of Entity-Component-System (ECS) to design Graphical User Interfaces (GUI) on web technologies (HTML canvas or SVG). It also extends the original ECS model to support advanced interfaces.
  • News of the Year:
    Raphaël James worked on a deep cleanup and improvements to the implementation:
      - migrated display and input support from SDL to web technologies (HTML canvas and SVG DOM)
      - changed the architecture to improve modularity and testability
      - added support for multiple instances and multiple canvases on a single web page
      - implemented an interactive notebook-like tutorial, and documented the implementation and API
  • Contact:
    Damien Pollet
  • Participants:
    Thibault Raffaillac, Stéphane Huot, Damien Pollet, Raphaël James
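
The Entity-Component-System pattern that Polyphony applies to GUIs can be sketched minimally: entities are bare identifiers, components are plain data attached to them, and systems are functions iterating over entities that carry the components they need. This is an illustrative toy, not Polyphony's API:

```python
import itertools

class World:
    """Toy ECS world: entities are ids, components are named data."""
    _ids = itertools.count()

    def __init__(self):
        self.components = {}              # entity -> {name: data}

    def entity(self, **components):
        e = next(World._ids)
        self.components[e] = components
        return e

    def having(self, *names):
        # Yield entities carrying all the requested components.
        for e, comps in self.components.items():
            if all(n in comps for n in names):
                yield e, comps

def layout_system(world):
    """System: stack every widget with a 'bounds' component vertically."""
    y = 0
    for e, c in world.having("bounds"):
        c["bounds"]["y"] = y
        y += c["bounds"]["h"]

world = World()
button = world.entity(bounds={"y": None, "h": 20}, label="OK")
slider = world.entity(bounds={"y": None, "h": 10})
layout_system(world)
```

The appeal for GUIs is that behaviors (layout, rendering, input dispatch) live in systems rather than in widget class hierarchies, so new behaviors can be added without touching the entities themselves.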

7.1.4 ClimbingAnalysis

  • Name:
    Interactive Analytical Tool for Lead Climbing Data
  • Keywords:
    Human Computer Interaction, Sport, Data analysis
  • Functional Description:
    This tool has been designed in collaboration with the Fédération Française de la Montagne et de l'Escalade (FFME) to analyze data from high-level athletes captured using a video annotation tool. It enables data to be visualized using a number of graphs focusing on different types of data, such as hold times and climbing speed. Depending on the type of analysis required, the tool can be used to compare several athletes on the same route, or to analyze an athlete's performances over several competitions.
  • News of the Year:
    Publication of a scientific paper at the French-speaking conference IHM'24 (https://ihm2024.afihm.org/) that reports on the tool's functionalities, along with a demonstration during the conference.
  • Contact:
    Bruno Fruchard
  • Participants:
    Bruno Fruchard, Timo Maszewski
  • Partner:
    Fédération Française de la Montagne et de l'Escalade (FFME)

7.1.5 ClimbingAnnotation

  • Name:
    Video annotation tool for sport performances
  • Keywords:
    Human Computer Interaction, JavaScript, Sport, Annotation tool, Video annotation
  • Functional Description:
    The tool enables viewing a video and adding frame-precise annotations that specify the start and end of significant actions. It was originally designed to study lead climbing videos and supports annotating actions such as grasping or releasing a hold, the athlete's energy consumption, or athletes' and trainers' comments. Annotations can be entered through sequencer buttons or keyboard shortcuts. As soon as annotations are entered, the tool automatically aggregates them into plots that facilitate the interpretation of performance results. Plots depict, for instance, the average holding time per hand or the evolution of the score over a time interval.
  • News of the Year:
    Publication of a case study of the design and implementation of this tool in collaboration with the FFME.
  • Publication:
  • Author:
    Bruno Fruchard
  • Contact:
    Bruno Fruchard
  • Partner:
    Fédération Française de la Montagne et de l'Escalade (FFME)

7.1.6 OpenSCAD_BP

  • Name:
    OpenSCAD version supporting bidirectional programming
  • Keywords:
    OpenSCAD, Bidirectional programming
  • Functional Description:
    The modified version of OpenSCAD supports integrated navigation and editing through interactions with the view. It supports reverse search, where interacting with elements in the view highlights the corresponding lines of code, and forward search, where selecting a line of code highlights the corresponding part in the view. Editing consists in translating and rotating objects in the view while the corresponding lines of code are updated accordingly.
  • URL:
  • Authors:
    Johann Gonzalez Avila, Danny Kieken, Géry Casiez, Thomas Pietrzak, Audrey Girouard
  • Contact:
    Géry Casiez

8 New results

In line with our research program, we have studied the dynamics of interaction at three levels, depending on the interaction time scale and the associated user perception and behavior: Micro-dynamics, Meso-dynamics, and Macro-dynamics. Considering the phenomena that occur at each of these levels, as well as their relationships, will help us acquire the necessary knowledge (Empowering Tools) and technological bricks (Interaction Machine) to reconcile the way interactive systems are designed and engineered with human abilities. Our strategy is to investigate issues and address challenges at all three levels of dynamics in order to contribute to our longer-term objective of defining the basic principles of an Interaction Machine.

8.1 Micro-dynamics

Participants: Géry Casiez [contact person], Yuan Chen, Alice Loizeau, Sylvain Malacria, Mathieu Nancel, Thomas Pietrzak, Grégoire Richard, Philippe Schmid.

8.1.1 MultiVibes: What if your VR Controller had 10 Times more Vibrotactile Actuators?

Consumer-grade virtual reality (VR) controllers are typically equipped with a single vibrotactile actuator, which can only create simple, non-spatialized tactile sensations through the vibration of the entire controller. Leveraging the funneling effect, an illusion in which multiple vibrations are perceived as a single one, we propose MultiVibes (Figure 2), a VR controller capable of rendering spatialized sensations at different locations on the user's hand and fingers 27. The designed prototype includes ten vibrotactile actuators, directly in contact with the skin of the hand, limiting the propagation of vibrations through the controller. We evaluated MultiVibes through two controlled experiments. The first one focused on the ability of users to recognize spatio-temporal patterns, while the second one focused on the impact of MultiVibes on the users' haptic experience when interacting with virtual objects they can feel. Taken together, the results show that MultiVibes is capable of providing accurate spatialized feedback and that users prefer it over recent VR controllers.

Figure 2

multivibe

Figure2: (a) MultiVibes prototype comprising 10 actuators in contact with the skin that can be controlled individually in amplitude and frequency. (b) Interaction with the edge of a virtual cube. (c) As the user slides the controller along the edge of the cube, they feel a phantom vibration point moving inside their hand according to the point of contact. The vibration point location is determined by the three closest actuators, controlled using our funneling model.
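The caption above mentions driving the three actuators closest to the target contact point. A common way to place a phantom vibration between actuators is distance-based amplitude weighting; the sketch below illustrates that general idea only and is not the funneling model actually used in MultiVibes:

```python
# Illustrative phantom-vibration weighting: pick the 3 actuators nearest to a
# target point and assign amplitudes inversely proportional to distance.
# This is a generic sketch, not the MultiVibes funneling model.
import math

def phantom_weights(target, actuators):
    """Amplitude weights for the 3 actuators nearest to `target` (2D points),
    inversely proportional to distance and normalized to sum to 1."""
    dist = lambda p: math.dist(target, p)
    nearest = sorted(actuators, key=dist)[:3]
    inv = [1.0 / max(dist(p), 1e-9) for p in nearest]
    total = sum(inv)
    return [(p, w / total) for p, w in zip(nearest, inv)]

actuators = [(0, 0), (1, 0), (0, 1), (1, 1)]  # actuator positions on the hand
for point, weight in phantom_weights((0.2, 0.1), actuators):
    print(point, round(weight, 2))
```

Moving the target point continuously shifts the weights, which is what produces the sensation of a vibration point travelling across the skin.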

8.1.2 Reliability of on-line visual feedback influences learning of continuous motor task of healthy young adults

A continuous task was used to determine how the reliability of on-line visual feedback during acquisition impacts motor learning 11. Participants performed a right-hand pointing task on a repeated sequence with a visual cursor that was either reliable, moderately unreliable, or largely unreliable. Delayed retention tests were administered 24h later, as well as intermanual transfer tests (performed with the left hand). A visuospatial transfer test used the same targets' sequence (same visuospatial configuration), while a motor transfer test used the visual mirror of the targets' sequence (same motor patterns). Results showed that pointing was slower and long-term learning disrupted in the largely unreliable condition, compared with the reliable and moderately unreliable conditions. Also, analysis of transfers revealed the classically observed better performance on visuospatial transfer than on motor transfer for the reliable condition. However, we show for the first time that this difference disappears when the cursor is moderately or largely unreliable. Interestingly, these results indicate a difference in the type of sequence coding depending on the reliability of the on-line visual feedback. This recourse to mixed coding opens up interesting perspectives, as it is known to promote better learning of motor sequences.

8.1.3 Use of variable online visual feedback to optimize sensorimotor coding and learning of a motor sequence

We characterized the impact of reliable and/or unreliable online visual feedback and their order of presentation on the coding and learning of a motor sequence 12. Participants practiced a 12-element motor sequence 200 times. During this acquisition phase, two groups received a single type (i.e., either reliable or unreliable) of online visual feedback, two other groups encountered both types of feedback: either reliable first then unreliable, or unreliable first then reliable. Delayed retention tests and intermanual transfer tests (visuospatial and motor) were administered 24 hours later. Results showed that varying the reliability of online visual information during the acquisition phase allowed participants to use different task coding modalities without damaging their long-term sequence learning. Moreover, starting with reliable visual feedback, replaced halfway through with unreliable feedback promoted motor coding, which is seldom observed. This optimization of motor coding opens up interesting perspectives, as it is known to promote better learning of motor sequences.

8.1.4 Exploring the Effects of Intended Use on Targeting in Virtual Reality

Researchers have shown that distance and size are not the only factors that impact the target acquisition time in desktop interfaces, but that its intended use, whether it is selected, dragged, or otherwise manipulated, can also have a significant influence. However, despite the increasing popularity of virtual 3D environments, the intended use of targets in these contexts has never been investigated, in spite of the richer, multidimensional manipulations afforded by these environments. To better understand the effects of intended use on target acquisition in virtual environments, we ran a study examining five different manipulation tasks: targeting, dual-targeting, throwing, docking and reorienting 15. Our results demonstrate that the intended use of a target affects its acquisition time and, correspondingly, the movement towards the target. As these environments become more commonplace settings for work and play, our work provides valuable information on throughput, applicable to a wide range of tasks.

8.2 Meso-dynamics

Participants: Géry Casiez, Bruno Fruchard, Felipe Gonzalez, Stéphane Huot, Edward Lank, Alice Loizeau, Sylvain Malacria, Damien Masson, Mathieu Nancel, Thomas Pietrzak [contact person], Damien Pollet, Marcelo Wanderley, Travis West.

8.2.1 Introducing Bidirectional Programming in Constructive Solid Geometry-Based CAD

3D Computer-Aided Design (CAD) users need to overcome several obstacles to benefit from the flexibility of programmatic interface tools. Besides the barriers of any programming language, users face challenges inherent to 3D spatial interaction. Scripting simple operations, such as moving an element in 3D space, can be significantly more challenging than performing the same task using direct manipulation. We introduce the concept of bidirectional programming for Constructive Solid Geometry (CSG) CAD tools (Figure 3), informed by interviews we performed with programmatic interface users 19. We describe how users can navigate and edit the 3D model using direct manipulation in the view or code editing while the system ensures consistency between both spaces. We also detail a proof-of-concept implementation using a modified version of OpenSCAD.

Figure 3

bidirectional-cad

Figure3: Bidirectional Programming features implemented in OpenSCAD. The system allows navigating the code through direct manipulation in the view (reverse search) and vice versa (forward search). The program also enables modification of the 3D model from the view while the system updates the code coherently.
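At its core, bidirectional programming rests on a mapping between generated scene objects and the source lines that produced them, consulted in both directions. The following sketch is hypothetical (the `scene` table and function names are illustrative, not the OpenSCAD_BP implementation):

```python
# Sketch of forward/reverse search for bidirectional CSG programming.
# A table maps each scene object to the source line that produced it.
# Hypothetical data and names; the actual OpenSCAD_BP implementation differs.

scene = {
    "cube_1": {"line": 3, "translate": (0, 0, 0)},
    "sphere_1": {"line": 5, "translate": (10, 0, 0)},
}

def reverse_search(obj_id):
    """Clicking an object in the view highlights its source line."""
    return scene[obj_id]["line"]

def forward_search(line):
    """Selecting a code line highlights the objects it produced."""
    return [oid for oid, o in scene.items() if o["line"] == line]

def edit_translate(obj_id, delta):
    """Dragging an object updates the scene and, in the real tool, would also
    rewrite the corresponding translate([...]) call in the code."""
    x, y, z = scene[obj_id]["translate"]
    dx, dy, dz = delta
    scene[obj_id]["translate"] = (x + dx, y + dy, z + dz)
    return f"line {scene[obj_id]['line']}: translate({list(scene[obj_id]['translate'])})"
```

Keeping this mapping up to date as the program is re-evaluated is what lets the system guarantee consistency between the view and the code.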

8.2.2 ChartDetective: Easy and Accurate Interactive Data Extraction from Complex Vector Charts

Extracting underlying data from rasterized charts is tedious and inaccurate; values might be partially occluded or hard to distinguish, and the quality of the image limits the precision of the data being recovered. To address these issues, we introduce a semi-automatic system leveraging vector charts to extract the underlying data easily and accurately (Figure 4) 25. The system is designed to make the most of vector information by relying on a drag-and-drop interface combined with selection, filtering, and previsualization features. A user study showed that participants spent less than 4 minutes to accurately recover data from charts published at CHI with diverse styles, thousands of data points, a combination of different encodings, and elements partially or completely occluded. Compared to other approaches relying on raster images, our tool successfully recovered all data, even when hidden, with a 78% lower relative error.

Figure 4

chartdetective

Figure4: ChartDetective is a system capable of recovering a chart's underlying data by leveraging its vector representation. Users select a vector chart and then (A, B) drag-and-drop elements that they wish to extract. (C) The extracted data can be leveraged for downstream tasks such as redesigning or interacting with the figure.
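The key property that makes vector charts so much more accurate than raster images is that element coordinates are exact: once two axis ticks have been matched to their values, every coordinate maps linearly to a data value. The minimal sketch below illustrates that calibration step with hypothetical values; it is not ChartDetective's actual pipeline:

```python
# Axis calibration for vector-chart data extraction (illustrative sketch).
# Two reference ticks (pixel, value) determine a linear pixel-to-data mapping.

def calibrate(p0, v0, p1, v1):
    """Return a function mapping a pixel coordinate to a data value."""
    scale = (v1 - v0) / (p1 - p0)
    return lambda p: v0 + (p - p0) * scale

# y ticks: pixel 300 is value 0, pixel 100 is value 50 (SVG y grows downward,
# so the scale is negative).
to_value = calibrate(300, 0.0, 100, 50.0)

bar_tops = [260, 180, 120]           # y coordinates of three bar tops
data = [to_value(y) for y in bar_tops]
print(data)  # [10.0, 30.0, 45.0]
```

Occluded elements remain recoverable because their geometry is still present in the vector file even when another shape is drawn on top of it.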

8.2.3 Charagraph: Interactive Generation of Charts for Realtime Annotation of Data-Rich Paragraphs

Documents often have paragraphs packed with numbers that are difficult to extract, compare, and interpret. To help readers make sense of data in text, we introduce the concept of Charagraphs: dynamically generated interactive charts and annotations for in-situ visualization, comparison, and manipulation of numeric data included within text 23. Three Charagraph characteristics are defined: leveraging related textual information about data; integrating textual and graphical representations; and interacting in different contexts. We contribute a document viewer to select in-text data; generate and customize Charagraphs; merge and refine a Charagraph using other in-text data; and identify, filter, compare, and sort data synchronized between text and visualization (Figure 5). The results of a study show that participants can easily create Charagraphs for diverse examples of data-rich text, and that when answering questions about data in text, participants were more accurate than when only reading the text.

Figure 5

charagraph

Figure5: (a) Charagraphs are in-situ visualizations of numeric data included within text that are dynamically generated (b) by delimiting a selection and (c) selecting a data group. Charagraphs support common data exploration tasks through interactive features such as (d) identifying and (e) comparing values.

8.2.4 User Preference and Performance using Tagging and Browsing for Image Labeling

Visual content must be labeled to facilitate navigation and retrieval, or to provide ground-truth data for supervised machine learning approaches. The efficiency of labeling techniques is crucial to produce numerous qualitative labels, but existing techniques remain sparsely evaluated. We systematically evaluate the efficiency of tagging and browsing tasks in relation to the number of images displayed, the interaction modes, and the visual complexity of images (Figure 6) 18. Tagging consists in focusing on a single image to assign multiple labels (image-oriented strategy), and browsing in focusing on a single label to assign to multiple images (label-oriented strategy). In a first experiment, we focused on the nudges inducing participants to adopt one of the strategies (n=18). In a second experiment, we evaluated the efficiency of the strategies (n=24). Results suggest that an image-oriented strategy (tagging task) leads to shorter annotation times, especially for complex images, and that participants tend to adopt it regardless of the conditions they face.

Figure 6

tagging-browsing

Figure6: Image labeling tools rely on two primary approaches: a) tagging a single image with labels, or browsing all images to assign a single label. b) We characterize the labeling task and systematically study the efficiency of these approaches by measuring the performance of annotators when counting shapes in images (circles are distractors). c) Annotators can select possible shape counts (labels) using toggle buttons to tag corresponding images. Through this setting we study what strategy users adopt when they have the choice and evaluate their efficiency (outlined shapes represent the annotators' targets).

8.3 Macro-dynamics

Participants: Géry Casiez, Bruno Fruchard, Stéphane Huot, Eva Mackamul, Sylvain Malacria [contact person], Mathieu Nancel, Aurélien Tabard.

8.3.1 Studying the Visual Communication of Input Possibilities

Presenting interaction possibilities is essential to foster the discovery of interaction, and yet modern interactive systems tend to adopt minimalist interfaces that do not inform users on how they can interact. We explored this issue for two distinct types of input possibilities.

First, we investigated how to best graphically represent microgestures (Figure 7). To that end, we created 21 designs, each depicting static and dynamic versions of 4 commonly used microgestures (tap, swipe, flex and hold – Figure 7) 20. We first studied these designs in a quantitative online experiment with 45 participants. We then conducted a qualitative laboratory experiment in Augmented Reality with 16 participants. Based on the results, we provide design guidelines on which elements of a microgesture should be represented and how. In particular, it is recommended to represent the actuator and the trajectory of a microgesture. Also, although preferred by users, dynamic representations are not considered better than their static counterparts for depicting a microgesture and do not necessarily result in better user recognition.

Figure 7

microgestures

Figure7: Single-picture representations of microgestures. As part of microgesture learning strategies in an interactive system, such representations can be used in (a) various contexts, such as research papers and Augmented Reality applications. This study integrates (b) 4 microgestures, namely tap, hold, swipe and flex and proposes 21 families of representations sharing a common design among the microgestures. (c) These 21 families were tested in an online experiment, the 4 top ranked were further tested with an AR headset. Design guidelines for the representation of microgestures emerge from these two experiments.

We then explored this issue on touch-based interfaces. Indeed, modern touch screens support different inputs such as 'Tap', 'Dwell', 'Double Tap' and 'Force Press' that are not visually signified to the user, and therefore remain unknown or underutilised. We proposed a design space of visual signifier characteristics that may impact the perception of in-place one finger inputs 22. We generated 36 designs and investigated their perception in an online survey (N=32) and an interactive experiment (N=24). The results suggest that visual signifiers increase the perception of input possibilities beyond 'Tap', and reduce perceived mental effort for participants, who also prefer added visual signifiers over a baseline. Our work informs how future touch-based interfaces could be designed to better communicate in-place single finger input possibilities.

8.3.2 A Case Study on the Design and Use of an Annotation and Analytical Tool Tailored To Lead Climbing

Annotating sport performances enables analyzing them quantitatively and qualitatively, and profiling athletes to identify their strengths and weaknesses. We ran a case study of the design and use of an annotation and analytical tool tailored to lead climbing analysis, developed with and for the French climbing federation 16. We used an iterative design cycle mostly fueled by virtual meetings with the federation's trainer and analyst to identify requirements and implement essential features over time. We complemented these meetings with two workshops involving them, as well as French athletes competing at the international level, to identify the tool's advantages and limitations. We contribute a list of insights, based on the design process and feedback from stakeholders, that inform the design of annotation and analytical tools for lead climbing and potentially other sports. We demonstrated the resulting annotation tool at the French-speaking conference IHM'23 17.

8.3.3 Statslator: Interactive Translation of NHST and Estimation Statistics Reporting Styles in Scientific Documents

Inferential statistics are typically reported using p-values (NHST) or confidence intervals on effect sizes (estimation). This is done using a range of styles, but some readers have preferences about how statistics should be presented and others have limited familiarity with alternatives. We proposed a system to interactively translate statistical reporting styles in existing documents, allowing readers to switch between interval estimates, p-values, and standardized effect sizes, all using textual and graphical reports that are dynamic and user-customizable 24 (Figure 8). We examined forty years of CHI papers. Using only the information reported in scientific documents, we derived conversion equations and validated them on simulated datasets, showing that conversions between p-values and confidence intervals are accurate. The system helps readers interpret statistics in a familiar style, compare reports that use different styles, and even validate the correctness of reports. Code and data are available online.

Figure 8

statslator

Figure8: Statslator takes existing statistical reports (a) using NHST or estimation; (b) calculates all possible statistical values using accurate conversion equations; (c) shows the report using graphical and interactive figures configurable by readers.
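To give an intuition of the kind of conversion such a system automates: under a normal approximation, a confidence interval on an effect determines the standard error, hence a z statistic and a two-sided p-value. The sketch below is only this textbook approximation, with a hypothetical `ci_to_p` helper; Statslator's actual equations and their validation are described in the paper:

```python
# Two-sided p-value from a 95% CI under a normal approximation (illustrative;
# not Statslator's derived equations).
import math

def ci_to_p(lo, hi):
    """p-value for H0: effect = 0, given a 95% CI [lo, hi] on the effect."""
    z_crit = 1.959964           # 97.5% standard-normal quantile (95% CI)
    center = (lo + hi) / 2      # point estimate
    se = (hi - lo) / (2 * z_crit)  # CI half-width = z_crit * SE
    z = center / se
    phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # normal CDF
    return 2 * (1 - phi(abs(z)))

# A CI whose lower bound sits just above 0 yields p just under 0.05:
print(ci_to_p(0.1, 4.1))
```

The reverse direction works the same way: a reported p-value and point estimate recover the standard error, and from it an interval at any confidence level.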

8.3.4 Obsolescence Paths: living with aging devices

Frequent renewal of digital devices accounts for a large share of their environmental impact because of the environmental costs of fabrication. This renewal is often attributed to sociocultural phenomena (e.g. presentation of self or persuasive marketing) and to broken hardware (e.g. shattered screens or degraded batteries). We investigate a complementary aspect: how people live with devices as they gradually become obsolete. We present a qualitative interview-based study with 18 participants on the role of software factors in the feeling of smartphone obsolescence 26. We identify three types of factors pushing for device renewal: upgrade issues, storage issues, and malfunctions. We find that these issues accumulate over time until a threshold is passed, leading to renewal: we define this process as an obsolescence path. This threshold is often tied to contextual and social concerns. We also outline the various strategies people use to prolong the life of almost-obsolete devices. Our results show that hardware and software obsolescence are tied, and should be considered together as they trace obsolescence paths. Based on these observations, we identify design opportunities to extend the lifespan of devices.

8.4 Interaction Machine

Participants: Géry Casiez, Bruno Fruchard, Stéphane Huot [contact person], Sylvain Malacria, Mathieu Nancel, Thomas Pietrzak, Damien Pollet, Philippe Schmid.

8.4.1 Signifidgets: how to adapt widgets so they visually communicate the interactions they support

GUIs are composed of different interactive graphical components, or widgets, with which users can interact to specify their intentions to the system. These widgets are diverse, ranging from simple selectable buttons to range sliders. They have a pre-defined representation and react by default to a list of basic user inputs. Existing programming toolkits use a strong coupling between a widget, its appearance, and the user inputs it supports. Signifidgets are a reflection on how programming APIs could (and should?) approach the use of widgets, allowing programmers to add different possible user inputs to a component, and modify its behavior and appearance to signify and react to them (Figure 9) 14.

Figure 9

signifidgets

Figure9: Example of programming a Signifidget and its associated appearance. (1) A Signifidget is instantiated and appears disabled; callbacks to support clicks (2) and double clicks (3) are added, resulting in the Signifidget's appearance being systematically updated to reflect support for these inputs and suggest them to the user.
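The figure above sketches the core idea: a widget's appearance is derived from the inputs it actually handles, rather than hard-coded. The API below is hypothetical (the `Signifidget` class, `on` method, and appearance strings are illustrative, not the paper's code):

```python
# Hypothetical Signifidget-style API: registering an input handler both
# enables that input and updates how the widget signifies itself.

class Signifidget:
    def __init__(self, label):
        self.label = label
        self.callbacks = {}  # input name -> handler

    def on(self, input_name, handler):
        # Adding a handler changes the widget's signified capabilities.
        self.callbacks[input_name] = handler
        return self

    @property
    def appearance(self):
        # Appearance is computed from supported inputs, never set directly.
        if not self.callbacks:
            return "disabled"
        return "button supporting: " + ", ".join(sorted(self.callbacks))

    def send(self, input_name):
        if input_name in self.callbacks:
            self.callbacks[input_name]()

w = Signifidget("Save")
print(w.appearance)  # a freshly instantiated widget appears disabled
w.on("click", lambda: print("saved"))
w.on("double-click", lambda: print("saved as..."))
print(w.appearance)
```

Decoupling appearance from a fixed widget class is what lets the toolkit, rather than the programmer, keep signifiers consistent with actual behavior.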

9 Partnerships and cooperations

9.1 International initiatives

9.1.1 Participation in other International Programs

Réapp

Participants: Géry Casiez, Sylvain Malacria, Yuan Chen.

  • Title:
    Reappearing Interfaces in Ubiquitous Environments
  • Partner Institution(s):
    Université de Lille, France and University of Waterloo, Canada
  • Date/Duration:
    2019 - 2023
  • Additional info/keywords:

    The LAI Réapp is a Université de Lille International Associated Laboratory between Loki and the Cheriton School of Computer Science at the University of Waterloo in Canada. It is funded by the Université de Lille to ease shared student supervision and regular inter-group contacts (with Edward Lank, Daniel Vogel & Keiko Katsuragawa at the University of Waterloo). The partner universities also co-funded the co-tutelle Ph.D. thesis of Yuan Chen.

    We are at the dawn of the next computing paradigm where everything will be able to sense human input and augment its appearance with digital information without using screens, smartphones, or special glasses—making user interfaces simply disappear. This introduces many problems for users, including the discoverability of commands and use of diverse interaction techniques, the acquisition of expertise, and the balancing of trade-offs between inferential (AI) and explicit (user-driven) interactions in aware environments. We argue that interfaces must reappear in an appropriate way to make ubiquitous environments useful and usable. This project tackles these problems, addressing (1) the study of human factors related to ubiquitous and augmented reality environments, and the development of new interaction techniques helping to make interfaces reappear; (2) the improvement of transition between novice and expert use and optimization of skill transfer; and, last, (3) the question of delegation in smart interfaces, and how to adapt the trade-off between implicit and explicit interaction.

    Joint publications in 2023: 15, 23, 25, 24

9.2 International research visitors

9.2.1 Visits to international teams

Research stays abroad
Eva Mackamul
  • Visited institution:
    University of Toronto
  • Country:
    Canada
  • Dates:
    01/04 to 14/08
  • Context of the visit:
    Eva Mackamul visited Prof. Fanny Chevalier at the Dynamic Graphics Project for four and a half months to work on the discoverability of touch-based inputs. Her visit resulted in a research paper that will be submitted to the ACM DIS '24 conference.
  • Mobility program/type of mobility:
    Funded by the MITACS GLOBALINK and MOBilité - LILle - EXcellence programs
Sylvain Malacria

9.3 Informal International Partners

  • Fanny Chevalier, University of Toronto, Ontario, CA

    visual communication of input possibilities on touch-screens

  • Scott Bateman, University of New Brunswick, Fredericton, CA

    interaction in 3D environments (VR, AR)

  • Audrey Girouard, Carleton University, Ottawa, CA

    interactions for digital fabrication 19 (co-tutelle thesis of Johann Felipe Gonzalez Avila)

  • Daniel Vogel, University of Waterloo, Waterloo, CA

    spatially augmented reality 15 (co-tutelle thesis of Yuan Chen) and polymorphic documents 25, 24, 23 (co-supervision of Damien Masson's Ph.D. thesis)

9.4 National initiatives

9.4.1 ANR

Causality (JCJC, 2019-2024)

Integrating Temporality and Causality to the Design of Interactive Systems

Participants: Géry Casiez, Stéphane Huot, Alice Loizeau, Sylvain Malacria, Mathieu Nancel [contact person], Philippe Schmid.

The project addresses a fundamental limitation in the way interfaces and interactions are designed and even thought about today, an issue we call procedural information loss: once a task has been completed by a computer, significant information that was used or produced while processing it is rendered inaccessible regardless of the multiple other purposes it could serve. This hampers the identification and solving of usability issues, as well as the development of new and beneficial interaction paradigms. We will explore, develop, and promote finer granularity and better-described connections between the causes of state changes in interactive systems, their context, their consequences, and their timing. We will apply this to facilitate the real-time detection, disambiguation, and solving of frequent timing issues related to human reaction time and system latency; to provide broader access to all levels of input data, thereby reducing the need to "hack" existing frameworks to implement novel interactive systems; and to greatly increase the scope and expressiveness of command histories, allowing better error recovery but also extended editing capabilities such as the reuse and sharing of previous actions.

Web site: Causality

Related publications in 2023: 31

Discovery (JCJC, 2020-2024)

Promoting and improving discoverability in interactive systems

Participants: Géry Casiez, Eva Mackamul, Sylvain Malacria [contact person], Raphaël Perraud.

This project addresses a fundamental limitation in the way interactive systems are usually designed: in practice, they do not tend to foster the discovery of their input methods (operations that can be used to communicate with the system) and corresponding features (commands and functionalities that the system supports). Its objective is to provide generic methods and tools to help the design of discoverable interactive systems: we will define validation procedures that can be used to evaluate the discoverability of user interfaces, design and implement novel UIs that foster input method and feature discovery, and create a design framework of discoverable user interfaces. This project investigates, but is not limited to, the context of touch-based interaction, and also explores two critical timings when the user might trigger a reflective practice on the available inputs and features: while the user is carrying out her task (discovery in-action); and after having carried it out, through informed reflection on her past actions (discovery on-action). This dual investigation will reveal more generic and context-independent properties that will be summarized in a comprehensive framework of discoverable interfaces. Our ambition is to trigger a significant change in the way all interactive systems and interaction techniques, existing and new, are conceived, designed, and implemented, with both performance and discoverability in mind.

Web site: Discovery

Related publications in 2023: 22, 25, 20, 14

PerfAnalytics (PIA “Sport de très haute performance”, 2020-2024)

In situ performance analysis

Participants: Géry Casiez, Bruno Fruchard [contact person], Stéphane Huot, Sylvain Malacria, Timo Maszewski.

The objective of the PerfAnalytics project is to study how video analysis, now a standard tool in sports training and practice, can be used to quantify various performance indicators and deliver feedback to coaches and athletes. The project, supported by the boxing, cycling, gymnastics, wrestling, and mountain and climbing federations, aims to provide sports partners with a scientific approach dedicated to video analysis, by coupling existing technical results on the estimation of gestures and figures from video with scientific biomechanical methodologies for advanced gesture objectification (muscular, for example).

Partners: the project involves several academic partners (Inria, INSEP, Univ. Grenoble Alpes, Univ. Poitiers, Univ. Aix-Marseille, Univ. Eiffel), as well as elite staff and athletes from different Olympic disciplines (Climbing, BMX Race, Gymnastics, Boxing and Wrestling).

Web site: PerfAnalytics

Related publications in 2023: 16, 17, 18

MIC (PRC, 2022-2026)

Microgesture Interaction in Context

Participants: Suliac Lavenant, Sylvain Malacria, Thomas Pietrzak [contact person].

MIC aims at studying and promoting microgesture-based interaction by putting it into practice in real-life use situations. Microgestures are gestures performed on one hand by that same hand; examples include tap and swipe gestures performed by one finger on another finger. We study interaction techniques based on microgestures, or on the combination of microgestures with another modality including haptic feedback, as well as mechanisms that support the discoverability and learnability of microgestures.

Partners: Univ. Grenoble Alpes, Inria, Univ. Toulouse 2, CNRS, Institut des Jeunes Aveugles, Immersion SA.

Web site: MIC

Related publications in 2023: 20

10 Dissemination

10.1 Promoting scientific activities

10.1.1 Scientific events: organisation

Member of the organizing committees
  • IHM: Sylvain Malacria (Late Breaking Work co-chair), Aurélien Tabard (Late Breaking Work co-chair), Géry Casiez (Doctoral consortium member)
  • Groupe GL-IHM: Damien Pollet (co-animator)
  • Creation of the IHM-Écologie working group: Aurélien Tabard (co-animator)

10.1.2 Scientific events: selection

Member of the conference program committees
  • CHI (ACM): Mathieu Nancel, Sylvain Malacria
  • DIS (ACM): Aurélien Tabard
  • IHM: Géry Casiez
  • UIST (ACM): Mathieu Nancel, Géry Casiez
  • VR (IEEE): Thomas Pietrzak
Reviewer
  • CHI (ACM): Géry Casiez, Bruno Fruchard, Aurélien Tabard, Thomas Pietrzak
  • DIS (ACM): Sylvain Malacria
  • EICS (ACM): Aurélien Tabard
  • HHAI: Mathieu Nancel
  • ISMAR (IEEE): Mathieu Nancel, Thomas Pietrzak
  • Interact: Sylvain Malacria
  • SIGGRAPH Asia: Bruno Fruchard
  • UIST (ACM): Sylvain Malacria, Thomas Pietrzak
  • TEI (ACM): Aurélien Tabard
  • VIS (IEEE): Mathieu Nancel, Aurélien Tabard
  • VR (IEEE): Géry Casiez

10.1.3 Journal

Reviewer - reviewing activities
  • IJHCS: Mathieu Nancel, Sylvain Malacria, Géry Casiez
  • IMWUT: Géry Casiez
  • ToCHI: Aurélien Tabard

10.1.4 Invited talks

  • “Interaction beyond the sensorimotor loop”, Loria seminar, Nancy: Thomas Pietrzak
  • “Aspects sensorimoteurs de l'Interaction en Réalité Virtuelle et Augmentée”, iCube seminar, Strasbourg: Thomas Pietrzak
  • “Exploitation de la boucle sensorimotrice pour la conception de systèmes interactifs”, LCOMS seminar, Metz: Thomas Pietrzak
  • “Do we need to ‘recall’ an efficient interaction?”, UCLIC research seminar, University College London: Sylvain Malacria
  • “Communicating and Increasing Interactivity in Research Illustrations”, University of Tokyo and Keio University, Tokyo: Sylvain Malacria

10.1.5 Leadership within the scientific community

10.1.6 Scientific expertise

10.1.7 Research administration

For Inria

  • Evaluation Committee: Stéphane Huot (member)
  • PEPR eNSEMBLE (~40M€ national research program): Stéphane Huot (program director for Inria)

For Inria center at the University of Lille

  • Direction Board: Stéphane Huot (Head of Science)
  • “Commission des Utilisateurs des Moyens Informatique” (CUMI): Mathieu Nancel (president)
  • “Commission des Emplois de Recherche” (CER): Stéphane Huot (member), Sylvain Malacria (member until October 2023)
  • “Commission de Développement Technologique” (CDT): Stéphane Huot (member)
  • “Comité Opérationnel d'Évaluation des Risques Légaux et Éthiques” (COERLE, the Inria Ethics board): Thomas Pietrzak (local correspondent)

For the Université de Lille

  • MADIS Graduate School council: Géry Casiez (member)
  • Computer Science Department commission mixte: Thomas Pietrzak (member)
  • Coordinator for internships at IUT de Lille: Géry Casiez
  • Co-coordinator for internships at the Computer Science Department: Damien Pollet
  • Deputy director of the Computer Science Department for finances: Thomas Pietrzak

For the CRIStAL lab of Université de Lille & CNRS

  • Direction Board: Géry Casiez (Deputy Director)
  • Computer Science PhD recruiting committee: Géry Casiez (member)
  • Laboratory council: Sylvain Malacria (member)
  • Coordinator of the Human & Humanities research axis: Thomas Pietrzak

Hiring committees

  • Inria's committee for Senior Researcher Positions (DR2): Stéphane Huot (member)
  • Inria's committees for Junior Researcher Positions (CRCN/ISFP) in Rennes: Stéphane Huot (member)
  • Université Grenoble Alpes repyramidage committee: Géry Casiez (president)
  • Centrale Lille committee for Assistant Professor Position in Control Theory: Géry Casiez (member)

10.2 Teaching - Supervision - Juries

10.2.1 Teaching

  • Master Informatique: Géry Casiez (12h), Mathieu Nancel (12h), Sylvain Malacria (12h), Thomas Pietrzak (12h), Interactions Humain-Machine avancées, M2, Université de Lille
  • Master Informatique: Thomas Pietrzak (48h), Sylvain Malacria (48h), Interaction Humain-Machine, M1, Université de Lille
  • Master Informatique: Thomas Pietrzak (20h), Initiation à l'Innovation et à la Recherche, M1, Université de Lille
  • Licence Informatique: Thomas Pietrzak (40h), Sylvain Malacria (4h), Bruno Fruchard (18h), Alice Loizeau (18h), Raphaël Perraud (18h), Introduction à l'Interaction Humain-Machine, L3, Université de Lille
  • Doctoral course: Géry Casiez (12h), Experimental research and statistical methods for Human-Computer Interaction, Université de Lille
  • BUT Informatique: Géry Casiez (38h), Grégoire Richard (28h), Bruno Fruchard (30h), IHM, 1st year, IUT de Lille - Université de Lille
  • BUT Informatique: Grégoire Richard (36h), BDD, 1st year, IUT de Lille - Université de Lille
  • BUT Informatique: Grégoire Richard (89h), Algorithmes et Programmation, 1st year, IUT de Lille - Université de Lille
  • BUT Informatique: Géry Casiez (11h), Automatisation de la chaîne de production, 3rd year, IUT de Lille - Université de Lille
  • Cursus ingénieur: Sylvain Malacria (9h), 3DETech, IMT Lille-Douai
  • Licence Informatique: Bruno Fruchard (31.5h), Technologies Web, L1, Université de Lille
  • Licence Informatique: Damien Pollet (36h), Alice Loizeau (21h), Informatique, L1, Université de Lille
  • Licence Informatique: Damien Pollet (18h), Conception orientée objet, L3, Université de Lille
  • Licence Informatique: Damien Pollet (21h), Programmation des systèmes, L3, Université de Lille
  • Master Informatique: Damien Pollet (27h), Langages et Modèles Dédiés, M2, Université de Lille

10.2.2 Supervision

  • PhD in progress: Suliac Lavenant, Using haptic cues to improve micro-gesture interaction, started Oct. 2023, advised by Thomas Pietrzak, Sylvain Malacria, Laurence Nigay & Alix Goguey
  • PhD in progress: Vincent Lambert, Discoverability and representation of interactions using micro-hand gestures, started in September 2022, advised by Laurence Nigay, Sylvain Malacria & Alix Goguey
  • PhD in progress: Raphaël Perraud, Fostering the discovery of interactions through adapted tutorials, started Nov. 2022, advised by Sylvain Malacria
  • PhD in progress: Alice Loizeau, Understanding and designing around error in interactive systems, started Oct. 2021, advised by Stéphane Huot & Mathieu Nancel
  • PhD in progress: Pierrick Uro, Studying the sense of co-presence in Augmented Reality, started in September 2021, advised by Thomas Pietrzak, Florent Berthaut, Laurent Grisoni & Marcelo Wanderley (co-tutelle with McGill University, Canada)
  • PhD in progress: Yuan Chen, Adaptive Interactions on Surfaces with an Augmented Lamp, started Dec. 2020, advised by Géry Casiez, Sylvain Malacria & Daniel Vogel (co-tutelle with University of Waterloo, Canada)
  • PhD in progress: Travis West, Examining the Design of Musical Interaction: The Creative Practice and Process, started Oct. 2020, advised by Stéphane Huot & Marcelo Wanderley (co-tutelle with McGill University, Canada)
  • PhD in progress: Johann Felipe González Ávila, Improving 3D design for personal fabrication, started Sep. 2020, advised by Géry Casiez, Thomas Pietrzak & Audrey Girouard (co-tutelle with Carleton University, Canada)
  • PhD: Eva Mackamul, Towards a Better Discoverability of Interactions in Graphical User Interfaces 28, defended in Dec. 2023, advised by Géry Casiez & Sylvain Malacria
  • PhD: Grégoire Richard, Touching Avatars: Role of Haptic Feedback during Interactions with Avatars in Virtual Reality 30, defended in June 2023, advised by Géry Casiez & Thomas Pietrzak
  • PhD: Philippe Schmid, Command History as a Full-fledged Interactive Object 31, defended in June 2023, advised by Stéphane Huot & Mathieu Nancel

10.2.3 Juries

  • Sylvain Malacria (HDR 29, Université de Lille): Stéphane Huot, examiner & sponsor
  • Adrien Chaffangeon Caillet (PhD, Université Grenoble Alpes): Géry Casiez, reviewer
  • Arthur Fages (PhD, Université Paris-Saclay): Géry Casiez, reviewer
  • Laura Pruszko (PhD, Université Grenoble Alpes): Thomas Pietrzak, reviewer

10.2.4 PhD mid-term evaluation committees

  • Julien Cauquis (IMT Atlantique): Géry Casiez
  • Victor Paredes (IRCAM/Sorbonne Université): Stéphane Huot
  • Johann Wentzel (Univ. Waterloo, Canada): Géry Casiez
  • Sabrina Toofany (Univ. Rennes): Thomas Pietrzak
  • Axel Carayon (Univ. Toulouse III): Thomas Pietrzak
  • Intissar Chérif (Univ. Paris-Saclay): Thomas Pietrzak
  • Jeanne Hecquard (Univ. Rennes): Thomas Pietrzak
  • Brice Parilusyan (De Vinci Innovation Center): Thomas Pietrzak

10.3 Popularization

10.3.1 Articles and contents

  • “Boxe : comment mieux comprendre les combats pour aider les athlètes grâce à l'analyse vidéo” 33 (Article in The Conversation): Bruno Fruchard
  • The newsletter of the Limites Numériques project, co-edited by Aurélien Tabard, has 1,600 subscribers and about 2,300 views per edition. The subscribers have varied profiles: computer scientists, social science scholars, journalists, regulators, students, science outreach professionals, developers, designers, digital project managers, etc.
  • Aurélien Tabard wrote an article, “Les chemins de l'obsolescence : vivre avec des appareils vieillissants”, for the AMUE magazine edition “Urgence sur les sobriétés numériques”.

10.3.2 Education

  • Participation in "Fête de la Science" at the Inria center of the University of Lille - Bruno Fruchard, October 11th 2023

    Research projects on sports analytics were presented to middle-school and high-school students through 30-minute presentations.

  • Participation in a round table discussion at the RIC (Research, Innovation and Creation) day at Polytech Lille – Alice Loizeau, October 13th 2023
  • Participation in "Filles, maths et informatique : une équation lumineuse" ("Girls, math and computer sciences: a luminous equation") – Alice Loizeau, October 19th 2023

    This event was organized by the "Femmes et mathématiques" association, in partnership with Animath. Alice participated in the "speed-meetings", presenting her research and activities to high-school girls.

  • Organization of “Les Innovantes” – Alice Loizeau & Bruno Fruchard, December 5th 2023

    We invited high-school students from the Hauts-de-France region to attend presentations promoting women's contributions to computer science. Four women presented their work on gender biases in computer science, human-computer interaction, virtual reality and haptics, and applied science for improved user experience.

  • Participation in "Doctorant.e-Lycéen.ne" ("PhD student - high school student") – Suliac Lavenant, Alice Loizeau, Raphaël Perraud, December 20th 2023

    This event was a workshop organized for the anniversary of the Inria centre of the University of Lille. Alice presented her research work to a high-school student, who then presented it at an official event summarizing significant work of the past 15 years.

  • Aurélien Tabard gave a lecture titled “Impacts environnementaux du numérique - Obsolescence des smartphones, de la technique aux usages” to master's students in the “Sociétés Numériques (SN)” double-degree program between Centrale Lille and Sciences Po.

10.3.3 Interventions

  • Participation in VivaTech 2023 – Bruno Fruchard, June 15-16th 2023

    At the conference, Bruno demonstrated the data-processing tools developed with the French Climbing Federation, as part of an Inria booth dedicated to sport-related science (VR simulations, data production and analysis).

  • Participation in "La Fête de la Science" – Bruno Fruchard and Timo Maszewski, October 6th-7th 2023

    The format was very similar to the one at VivaTech. In addition, we were interviewed by "l'Esprit Sorcier" to present the broad scope of the PerfAnalytics project (sequence available online).

  • Participation in an Xperium round table with former Olympic medalist Daouda Sow – Bruno Fruchard, October 12th 2023

    Bruno was invited to a discussion with Daouda Sow during an event organized by Xperium (video available online). The goal was to present his work with the French boxing federation and to let the former Olympic athlete share how it resonated with his own experience.

  • Aurélien Tabard presented, with Thomas Thibault, results of the Limites Numériques project at Numérique en Commun (2,000 attendees), and gave presentations at Lille Dev Fest (1,000 attendees), Bordeaux.io (1,000 attendees), and Mobilis in Mobile (200 attendees).

11 Scientific production

11.1 Major publications

11.2 Publications of the year

International journals

  • 11. M. Bernardo, Y. Blandin, G. Casiez and C. Scotto. “Reliability of on-line visual feedback influences learning of continuous motor task of healthy young adults”. Frontiers in Psychology, 14, October 2023. HAL, DOI.
  • 12. M. Bernardo, Y. Blandin, G. Casiez and C. Scotto. “Use of variable online visual feedback to optimize sensorimotor coding and learning of a motor sequence”. PLoS ONE, 18(11), November 2023, e0294138. HAL, DOI.
  • 13. T. Pietrzak. “« L'envers des mots » : Haptique”. The Conversation, October 2023. HAL.

International peer-reviewed conferences

Doctoral dissertations and habilitation theses

  • 28. E. Mackamul. Investigating the Influence of Visual Signifiers to Foster the Discovery of Touch-Based Interactions. PhD thesis, Université de Lille, December 2023. HAL.
  • 29. S. Malacria. Why interaction methods should be exposed and recognizable. Habilitation thesis, Université de Lille, October 2023. HAL.
  • 30. G. Richard. The role of haptic feedback in avatar-based interactions in virtual reality. PhD thesis, Université de Lille, June 2023. HAL.
  • 31. P. Schmid. Developing advanced command histories to improve digital editing processes: A Swiss army knife of command histories model and architecture. PhD thesis, Centre Inria de l'Université de Lille, June 2023. HAL.

Other scientific publications

11.3 Other

Scientific popularization

11.4 Cited publications