2022
Activity report
Project-Team
EX-SITU
RNSR: 201521246H
In partnership with:
CNRS, Université Paris-Saclay
Team name:
Extreme Situated Interaction
In collaboration with:
Laboratoire Interdisciplinaire des Sciences du Numérique
Domain
Perception, Cognition and Interaction
Theme
Interaction and visualization
Creation of the Project-Team: 2017 July 01

Keywords

Computer Science and Digital Science

  • A5.1. Human-Computer Interaction
  • A5.1.1. Engineering of interactive systems
  • A5.1.2. Evaluation of interactive systems
  • A5.1.5. Body-based interfaces
  • A5.1.6. Tangible interfaces
  • A5.1.7. Multimodal interfaces
  • A5.2. Data visualization
  • A5.6.2. Augmented reality

Other Research Topics and Application Domains

  • B2.8. Sports, performance, motor skills
  • B6.3.1. Web
  • B6.3.4. Social Networks
  • B9.2. Art
  • B9.2.1. Music, sound
  • B9.2.4. Theater
  • B9.5. Sciences

1 Team members, visitors, external collaborators

Research Scientists

  • Wendy Mackay [Team leader, INRIA, Senior Researcher, HDR]
  • Janin Koch [INRIA, Researcher]
  • Theophanis Tsandilas [INRIA, Researcher, HDR]

Faculty Members

  • Michel Beaudouin-Lafon [UNIV PARIS SACLAY, Professor, HDR]
  • Sarah Fdili Alaoui [UNIV PARIS SACLAY, Associate Professor]

Post-Doctoral Fellows

  • Jessalyn Alvina [Univ Paris-Saclay, until Mar 2022]
  • John Sullivan [Univ Paris Saclay, from Sep 2022]

PhD Students

  • Tove Bang [UNIV PARIS SACLAY]
  • Alexandre Battut [UNIV PARIS SACLAY]
  • Eya Ben Chaaben [INRIA, from Nov 2022]
  • Romane Dubus [INRIA, from Oct 2022]
  • Arthur Fages [UNIV PARIS SACLAY]
  • Camille Gobert [INRIA]
  • Han Han [UNIV PARIS SACLAY, until Mar 2022]
  • Capucine Nghiem [UNIV PARIS SACLAY]
  • Anna Offenwanger [CNRS]
  • Miguel Renom [Univ Paris-Saclay, until May 2022]
  • Wissal Sahel [Institut de recherche technologique System X]
  • Teo Sanchez [Inria, until Mar 2022]
  • Martin Tricaud [CNRS]
  • Elizabeth Walton [Univ Paris-Saclay, until Mar 2022]

Technical Staff

  • Alexandre Kabil [CNRS, Engineer, from Sep 2022]
  • Nicolas Taffin [INRIA, Engineer]
  • Junhang Yu [Univ Paris-Saclay, Engineer]

Interns and Apprentices

  • Raphaël Bournel [Univ Paris-Saclay, from May 2022 until Jul 2022]
  • Leo Cheddin [Univ Paris Saclay, from Mar 2022 until Jul 2022]
  • Bastien Destephen [Univ Paris-Saclay, until Feb 2022]
  • Romane Dubus [Inria, from Apr 2022 until Aug 2022]
  • Dylan Fluzin [Univ Paris Saclay, from Mar 2022 until Jul 2022]
  • Xun Gong [INRIA, from May 2022 until Oct 2022]
  • Shujian Guan [?, from Apr 2022 until Aug 2022]
  • Lea Paymal [Univ Paris Saclay]
  • Alexandre Pham [Univ Paris-Saclay, from May 2022 until Jul 2022]
  • Kevin Ratovo [Univ Paris-Saclay, from Apr 2022 until Sep 2022]
  • Stephanie Vo [Univ Paris-Saclay, from Apr 2022 until Sep 2022]

2 Overall objectives

Interactive devices are everywhere: we wear them on our wrists and belts; we consult them from purses and pockets; we read them on the sofa and on the metro; we rely on them to control cars and appliances; and soon we will interact with them on living room walls and billboards in the city. Over the past 30 years, we have witnessed tremendous advances in both hardware and networking technology, which have revolutionized all aspects of our lives, not only business and industry, but also health, education and entertainment. Yet the ways in which we interact with these technologies remain mired in the 1980s. The graphical user interface (GUI), revolutionary at the time, has been pushed far past its limits. Originally designed to help secretaries perform administrative tasks in a work setting, the GUI is now applied to every kind of device, for every kind of setting. While this may make sense for novice users, it forces expert users to use frustratingly inefficient and idiosyncratic tools that are neither powerful nor incrementally learnable.

ExSitu explores the limits of interaction — how extreme users interact with technology in extreme situations. Rather than beginning with novice users and adding complexity, we begin with expert users who already face extreme interaction requirements. We are particularly interested in creative professionals, artists and designers who rewrite the rules as they create new works, and scientists who seek to understand complex phenomena through creative exploration of large quantities of data. Studying these advanced users today will not only help us anticipate the routine tasks of tomorrow, but also advance our understanding of interaction itself. We seek to create effective human-computer partnerships, in which expert users control their interaction with technology. Our goal is to advance our understanding of interaction as a phenomenon, with a corresponding paradigm shift in how we design, implement and use interactive systems. We have already made significant progress through our work on instrumental interaction and co-adaptive systems, and we hope to extend these into a foundation for the design of all interactive technology.

3 Research program

We characterize Extreme Situated Interaction as follows:

Extreme users. We study extreme users who make extreme demands on current technology. We know that human beings take advantage of the laws of physics to find creative new uses for physical objects. However, this level of adaptability is severely limited when manipulating digital objects. Even so, we find that creative professionals, such as artists, designers and scientists, often adapt interactive technology in novel and unexpected ways and find creative solutions. By studying these users, we hope not only to address the specific problems they face, but also to identify the underlying principles that will help us to reinvent virtual tools. We seek to shift the paradigm of interactive software, to establish the laws of interaction that significantly empower users and allow them to control their digital environment.

Extreme situations. We develop extreme environments that push the limits of today's technology. We take as given that future developments will solve “practical” problems such as cost, reliability and performance and concentrate our efforts on interaction in and with such environments. This has been a successful strategy in the past: Personal computers only became prevalent after the invention of the desktop graphical user interface. Smartphones and tablets only became commercially successful after Apple cracked the problem of a usable touch-based interface for the iPhone and the iPad. Although wearable technologies, such as watches and glasses, are finally beginning to take off, we do not believe that they will create the major disruptions already caused by personal computers, smartphones and tablets. Instead, we believe that future disruptive technologies will include fully interactive paper and large interactive displays.

Our extensive experience with the Digiscope WILD and WILDER platforms places us in a unique position to understand the principles of distributed interaction that extreme environments call for. We expect to integrate, at a fundamental level, the collaborative capabilities that such environments afford. Indeed, almost all of our activities in both the digital and the physical world take place within a complex web of human relationships. Current systems only support, at best, passive sharing of information, e.g., through the distribution of independent copies. Our goal is to support active collaboration, in which multiple users are actively engaged in the lifecycle of digital artifacts.

Extreme design. We explore novel approaches to the design of interactive systems, with particular emphasis on extreme users in extreme environments. Our goal is to empower creative professionals, allowing them to act as both designers and developers throughout the design process. Extreme design affects every stage, from requirements definition, to early prototyping and design exploration, to implementation, to adaptation and appropriation by end users. We hope to push the limits of participatory design to actively support creativity at all stages of the design lifecycle. Extreme design does not stop with purely digital artifacts. The advent of digital fabrication tools and FabLabs has significantly lowered the cost of making physical objects interactive. Creative professionals now create hybrid interactive objects that can be tuned to the user's needs. Integrating the design of physical objects into the software design process raises new challenges, with new methods and skills to support this form of extreme prototyping.

Our overall approach is to identify a small number of specific projects, organized around four themes: Creativity, Augmentation, Collaboration and Infrastructure. Specific projects may address multiple themes, and different members of the group work together to advance these different topics.

4 Application domains

4.1 Creative industries

We work closely with creative professionals in the arts and in design, including music composers, musicians, and sound engineers; painters and illustrators; dancers and choreographers; theater groups; game designers; graphic and industrial designers; and architects.

4.2 Scientific research

We work with creative professionals in the sciences and engineering, including neuroscientists and doctors; programmers and statisticians; chemists and astrophysicists; and researchers in fluid mechanics.

5 Highlights of the year

  • Wendy Mackay was appointed to the Annual Chair in Computer Science at the Collège de France (2021-2022).
  • Michel Beaudouin-Lafon received the CNRS Silver Medal for his research in Computer Science.
  • The SustainML project received funding from the European Union's Horizon Europe research and innovation programme (grant No 101070408), with 7 partners and 4M€ in funding; HCI coordinator: Janin Koch.
  • Janin Koch and Wendy Mackay successfully ran the second annual creARTathon, a creative hackathon for 35 students in HCI, AI, Art and Design, with a final public exhibit at an art gallery in Paris.
  • Wendy Mackay co-organized a one-week Dagstuhl Seminar on Human-Centered Computing, with Michel Beaudouin-Lafon and Janin Koch attending.
  • Viktor Gustafsson (Ph.D.) and Robert Falcasantos (MA) created a new startup, REALSpawn, with the Inria Startup Studio.
  • Michel Beaudouin-Lafon is co-director of PEPR eNSEMBLE, a 38M€ project on the future of digital collaboration funded by ANR/France 2030 and involving 80 research groups across France.

5.1 Awards

  • Téo Sanchez and Wendy Mackay: Best Paper Award at IUI 2022 for “Deep Learning Uncertainty in Machine Teaching.” 26
  • Han Han, Wendy Mackay, and Michel Beaudouin-Lafon: Honorable Mention Award at ACM CHI 2022 for “Passages: Interacting with Text Across Documents.” 22
  • Miguel Renom and Michel Beaudouin-Lafon: Honorable Mention Award at ACM CHI 2022 for “Exploring Technical Reasoning in Digital Tool Use.” 25
  • Sarah Fdili Alaoui: Honorable Mention Award at ACM CHI 2022 for “CO/DA: Live-Coding Movement-Sound Interactions for Dance Improvisation.” 19
  • Theophanis Tsandilas: Honorable Mention Award at ACM CHI 2022 for “Gesture Elicitation as a Computational Optimization Problem.” 28
  • Viktor Gustafsson received the 2022 iPhD Palmarès: concours d'innovation (Ph.D. thesis prize).
  • Romane Dubus received two Master's thesis prizes: « Prix Jeunes SEE Occitanie » (from SEE, the Société de l’Électricité, de l’Électronique et des Technologies de l’Information et de la Communication) and « Prix Pascal Brisset » (from ENAC, the Ecole Nationale de l'Aviation Civile).

6 New software and platforms

6.1 New software

6.1.1 Digiscape

  • Name:
    Digiscape
  • Keywords:
    2D, 3D, Node.js, Unity 3D, Video stream
  • Functional Description:
    Through the Digiscape application, users can connect to a remote workspace and share files, video and audio streams with other users. Applications running on complex visualization platforms can be easily launched and synchronized.
  • Contact:
    Olivier Gladin
  • Partners:
    Maison de la simulation, UVSQ, CEA, ENS Cachan, LIMSI, LRI - Laboratoire de Recherche en Informatique, CentraleSupélec, Telecom Paris

6.1.2 Touchstone2

  • Keyword:
    Experimental design
  • Functional Description:

    Touchstone2 is a graphical user interface to create and compare experimental designs. It is based on a visual language: Each experiment consists of nested bricks that represent the overall design, blocking levels, independent variables, and their levels. Parameters such as variable names, counterbalancing strategy and trial duration are specified in the bricks and used to compute the minimum number of participants for a balanced design, account for learning effects, and estimate session length. An experiment summary appears below each brick assembly, documenting the design. Manipulating bricks immediately generates a corresponding trial table that shows the distribution of experiment conditions across participants. Trial tables are faceted by participant. Using brushing and fish-eye views, users can easily compare among participants and among designs on one screen, and examine their trade-offs.

    Touchstone2 plots a power chart for each experiment in the workspace. Each power curve is a function of the number of participants, and thus increases monotonically. Dots on the curves denote numbers of participants for a balanced design. The pink area corresponds to a power less than the 0.8 criterion: the first dot above it indicates the minimum number of participants. To refine this estimate, users can choose among Cohen’s three conventional effect sizes, directly enter a numerical effect size, or use a calculator to enter mean values for each treatment of the dependent variable (often from a pilot study). A rough sketch of this power computation appears after this entry.

    Touchstone2 can export a design in a variety of formats, including JSON and XML for the trial table, and TSL, a language we have created to describe experimental designs. A command-line tool is provided to generate a trial table from a TSL description.

    Touchstone2 runs in any modern Web browser and is also available as a standalone tool. It is used at ExSitu for the design of our experiments, and by other Universities and research centers worldwide. It is available under an Open Source licence at https://touchstone2.org.

  • URL:
    https://touchstone2.org
  • Contact:
    Wendy Mackay
  • Partner:
    University of Zurich
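
As a rough illustration of the power computation behind Touchstone2's power chart, the sketch below uses the power-analysis API of the Python statsmodels library, with Cohen's three conventional effect sizes and the 0.8 power criterion mentioned above. It is our own illustrative code, not Touchstone2's implementation.

    # Minimum group size reaching 0.8 power, for Cohen's conventional
    # effect sizes (illustrative sketch, not Touchstone2's code).
    import numpy as np
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    ns = np.arange(2, 500)  # candidate numbers of participants per group

    for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
        # Power curve: monotonically increasing in the number of participants.
        power = analysis.power(effect_size=d, nobs1=ns, alpha=0.05)
        enough = ns[power >= 0.8]     # the dots above the "pink area"
        print(f"{label} effect (d={d}): minimum {int(enough[0])} per group")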

6.1.3 UnityCluster

  • Keywords:
    3D, Virtual reality, 3D interaction
  • Functional Description:

    UnityCluster is middleware to distribute any Unity 3D (https://unity3d.com/) application on a cluster of computers that run in interactive rooms, such as our WILD and WILDER rooms, or immersive CAVEs (Cave Automatic Virtual Environments). Users can interact with the application through various interaction resources.

    UnityCluster provides an easy solution for running existing Unity 3D applications on any display that requires a rendering cluster with several computers. UnityCluster is based on a master-slave architecture: the master computer runs the main application and the physical simulation and manages the input, while the slave computers receive updates from the master and render their respective parts of the 3D scene. UnityCluster manages data distribution and synchronization among the computers to obtain a consistent image on the entire wall-sized display surface.

    UnityCluster can also deform the displayed images according to the user's position in order to match the viewing frustum defined by the user's head and the four corners of the screens. This respects the motion parallax of the 3D scene, giving users a better sense of depth. A geometric sketch of this computation appears after this entry.

    UnityCluster is composed of a set of C# scripts that manage the network connection, data distribution, and the deformation of the viewing frustum. In order to distribute an existing application on the rendering cluster, all scripts must be embedded into a Unity package that is included in an existing Unity project.

  • Contact:
    Cédric Fleury
  • Partner:
    Inria
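
The head-coupled deformation described above is usually computed as an off-axis perspective frustum derived from the viewer's head position and the screen corners. The following Python/NumPy sketch follows the standard construction (generalized perspective projection); it is a minimal illustration of the geometry, not UnityCluster's actual C# code.

    # Off-axis frustum from head position and screen corners (sketch).
    import numpy as np

    def off_axis_frustum(pa, pb, pc, pe, near):
        """pa, pb, pc: lower-left, lower-right, upper-left screen corners
        in world coordinates; pe: eye position; near: near-plane distance."""
        pa, pb, pc, pe = map(np.asarray, (pa, pb, pc, pe))
        vr = pb - pa; vr = vr / np.linalg.norm(vr)           # screen right axis
        vu = pc - pa; vu = vu / np.linalg.norm(vu)           # screen up axis
        vn = np.cross(vr, vu); vn = vn / np.linalg.norm(vn)  # screen normal
        va, vb, vc = pa - pe, pb - pe, pc - pe  # eye-to-corner vectors
        d = -np.dot(va, vn)                     # eye-to-screen distance
        s = near / d
        # Frustum extents on the near plane, for a glFrustum-style matrix.
        return (np.dot(vr, va) * s, np.dot(vr, vb) * s,
                np.dot(vu, va) * s, np.dot(vu, vc) * s)

    # Example: a 2 m x 1 m screen, eye 1.5 m in front and off-center.
    print(off_axis_frustum([-1, 0, 0], [1, 0, 0], [-1, 1, 0],
                           [0.3, 0.5, 1.5], near=0.1))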

6.1.4 VideoClipper

  • Keyword:
    Video recording
  • Functional Description:

    VideoClipper is an iOS app for the Apple iPad, designed to guide the capture of video during a variety of prototyping activities, including video brainstorming, interviews, video prototyping and participatory design workshops. It relies heavily on Apple’s AVFoundation, a framework that provides essential services for working with time-based audiovisual media on iOS (https://developer.apple.com/av-foundation/). Key uses include: transforming still images (title cards) into video tracks, composing video and audio tracks in memory to create a preview of the resulting video project, and saving video files into the default Photo Album outside the application.

    VideoClipper consists of four main screens: project list, project, capture and import. The project list screen shows a list with the most recent projects at the top and allows the user to quickly add, remove or clone (copy and paste) projects. The project screen includes a storyboard composed of storylines that can be added, cloned or deleted. Each storyline is composed of a single title card, followed by one or more video clips. Users can reorder storylines within the storyboard, and the elements within each storyline, through direct manipulation. Users can preview the complete storyboard, including all title cards and videos, by pressing the play button, or export it to the iPad’s Photo Album by pressing the action button.

    VideoClipper offers multiple tools for editing titlecards and storylines. Tapping on the title card lets the user edit the foreground text, including font, size and color, change background color, add or edit text labels, including size, position, color, and add or edit images, both new pictures and existing ones. Users can also delete text labels and images with the trash button. Video clips are presented via a standard video player, with standard interaction. Users can tap on any clip in a storyline to: trim the clip with a non-destructive trimming tool, delete it with a trash button, open a capture screen by clicking on the camera icon, label the clip by clicking a colored label button, and display or hide the selected clip by toggling the eye icon.

    VideoClipper is currently in beta test, and is used by students in two HCI classes at the Université Paris-Saclay, by researchers in ExSitu, and by external researchers for both teaching and research work. A beta test version is available on demand via the Apple TestFlight online service.

  • Contact:
    Wendy Mackay

6.1.5 WildOS

  • Keywords:
    Human Computer Interaction, Wall displays
  • Functional Description:

    WildOS is middleware to support applications running in an interactive room featuring various interaction resources, such as our WILD and WILDER rooms: a tiled wall display, a motion tracking system, tablets and smartphones, etc. The conceptual model of WildOS is a platform, such as the WILD or WILDER room, described as a set of devices on which one or more applications can run.

    WildOS consists of a server running on a machine that has network access to all the machines involved in the platform, and a set of clients running on the various interaction resources, such as a display cluster or a tablet. Once WildOS is running, applications can be started and stopped and devices can be added to or removed from the platform. A minimal conceptual sketch of this server/clients architecture appears after this entry.

    WildOS relies on Web technologies, most notably Javascript and node.js, as well as node-webkit and HTML5. This makes it inherently portable (it is currently tested on Mac OS X and Linux). While applications can be developed using only these Web technologies, it is also possible to bridge to existing applications developed in other environments if they provide sufficient access for remote control. Sample applications include a web browser, an image viewer, a window manager, and the BrainTwister application developed in collaboration with neuroanatomists at NeuroSpin.

    WildOS is used for several research projects at ExSitu and by other partners of the Digiscope project. It was also deployed on several of Google's interactive rooms in Mountain View, Dublin and Paris. It is available under an Open Source licence at https://bitbucket.org/mblinsitu/wildos.

  • URL:
    https://bitbucket.org/mblinsitu/wildos
  • Contact:
    Michel Beaudouin-Lafon
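
To make the server/clients split concrete, here is a deliberately minimal state-broadcasting server sketched in Python with only the standard library. WildOS itself is written in JavaScript on node.js and its protocol is much richer; the newline-delimited JSON wire format below is invented for illustration.

    # Conceptual sketch of a WildOS-like server: every state update sent
    # by one client (display tile, tablet, ...) is re-broadcast to all others.
    import asyncio, json

    clients = set()  # writers for the currently connected clients

    async def handle_client(reader, writer):
        clients.add(writer)
        try:
            while data := await reader.readline():
                json.loads(data)   # assume newline-delimited JSON messages
                for w in clients:  # re-broadcast so all devices converge
                    if w is not writer:
                        w.write(data)
                        await w.drain()
        finally:
            clients.discard(writer)

    async def main():
        server = await asyncio.start_server(handle_client, "0.0.0.0", 8080)
        async with server:
            await server.serve_forever()

    if __name__ == "__main__":
        asyncio.run(main())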

6.1.6 StructGraphics

  • Keywords:
    Data visualization, Human Computer Interaction
  • Scientific Description:
    Information visualization research has developed powerful systems that enable users to author custom data visualizations without textual programming. These systems can support graphics-driven practices by bridging lazy data-binding mechanisms with vector-graphics editing tools. Yet, despite their expressive power, visualization authoring systems often assume that users want to generate visual representations that they already have in mind rather than explore designs. They also impose a data-to-graphics workflow, where binding data dimensions to graphical properties is a necessary step for generating visualization layouts. In this work, we introduce StructGraphics, an approach for creating data-agnostic and fully reusable visualization designs. StructGraphics enables designers to construct visualization designs by drawing graphics on a canvas and then structuring their visual properties without relying on a concrete dataset or data schema. In StructGraphics, tabular data structures are derived directly from the structure of the graphics. Later, designers can link these structures with real datasets through a spreadsheet user interface. StructGraphics supports the design and reuse of complex data visualizations by combining graphical property sharing, by-example design specification, and persistent layout constraints. We demonstrate the power of the approach through a gallery of visualization examples and reflect on its strengths and limitations in interaction with graphic designers and data visualization experts.
  • Functional Description:
    StructGraphics is a user interface for creating data-agnostic and fully reusable designs of data visualizations. It enables visualization designers to construct visualization designs by drawing graphics on a canvas and then structuring their visual properties without relying on a concrete dataset or data schema. Overall, StructGraphics inverts the workflow of traditional visualization-design systems: rather than transforming data dependencies into visualization constraints, it allows users to interactively define the property and layout constraints of their visualization designs and then translate these graphical constraints into alternative data structures. Since visualization designs are data-agnostic, they can be easily reused and combined with different datasets. A toy sketch of this inverse workflow appears after this entry.
  • Contact:
    Theophanis Tsandilas
  • Participant:
    Theophanis Tsandilas
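
The toy sketch below illustrates the inverse workflow with invented data structures (it is not the actual StructGraphics implementation): a tabular structure is derived from drawn marks, and the design is then linked to a real dataset.

    # Derive a data table from drawn graphics, then rebind real data.
    bars = [  # three drawn rectangles sharing their fill property
        {"x": 0, "height": 40, "fill": "steelblue"},
        {"x": 1, "height": 70, "fill": "steelblue"},
        {"x": 2, "height": 25, "fill": "steelblue"},
    ]

    def derive_table(graphics, prop="height"):
        """One row per mark, one column per non-shared visual property."""
        return [{"row": i, prop: g[prop]} for i, g in enumerate(graphics)]

    def rebind(graphics, values, prop="height"):
        """Link the design to a dataset: map values linearly onto the
        free visual property, preserving the drawn design's scale."""
        k = max(g[prop] for g in graphics) / max(values)
        return [{**g, prop: v * k} for g, v in zip(graphics, values)]

    print(derive_table(bars))        # the structure implied by the drawing
    print(rebind(bars, [12, 3, 9]))  # the same design, bound to real data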

6.2 New platforms

6.2.1 WILD

Participants: Michel Beaudouin-Lafon [correspondant], Cédric Fleury, Olivier Gladin.

WILD is our first experimental ultra-high-resolution interactive environment, created in 2009. In 2019-2020 it received a major upgrade: the 16-computer cluster was replaced by new machines with top-of-the-line graphics cards, and the 32-screen display was replaced by 32 32" 8K displays, resulting in a resolution of 1 gigapixel (61,440 x 17,280) for an overall size of 5.80 m x 1.70 m (280 ppi). An infrared frame adds multitouch capability to the entire display area. The platform also features a camera-based motion tracking system that lets users interact with the wall, as well as the surrounding space, with various mobile devices.

6.2.2 WILDER

Participants: Michel Beaudouin-Lafon [correspondant], Cédric Fleury, Olivier Gladin.

WILDER (Figure 1) is our second experimental ultra-high-resolution interactive environment, which follows the WILD platform developed in 2009. It features a wall-sized display with seventy-five 20" LCD screens, i.e. a 5.50 m x 1.80 m (18' x 6') wall displaying 14,400 x 4,800 = 69 million pixels, powered by a 10-computer cluster and two front-end computers. The platform also features a camera-based motion tracking system that lets users interact with the wall, as well as the surrounding space, with various mobile devices. The display uses a multitouch frame (one of the largest of its kind in the world) to make the entire wall touch sensitive.

WILDER was inaugurated in June, 2015. It is one of the ten platforms of the Digiscope Equipment of Excellence and, in combination with WILD and the other Digiscope rooms, provides a unique experimental environment for collaborative interaction.

In addition to using WILD and WILDER for our research, we have also developed software architectures and toolkits, such as WildOS and Unity Cluster, that enable developers to run applications on these multi-device, cluster-based systems.

Figure 1: The WILDER platform.

7 New results

7.1 Fundamentals of Interaction

Participants: Michel Beaudouin-Lafon [correspondant], Wendy Mackay, Theophanis Tsandilas, Camille Gobert, Han Han, Miguel Renom, Martin Tricaud.

In order to better understand fundamental aspects of interaction, ExSitu conducts in-depth observational studies and controlled experiments that contribute to theories and frameworks unifying our findings and help us generate new, advanced interaction techniques 1. Our previous work on Bayesian Information Gain 5 led to a chapter 34 in a book on Bayesian methods for interaction design. Our theoretical work also leads us to deepen or re-analyze existing theories and methodologies in order to gain new insights.

One theory that has focused our attention is Technical Reasoning. The Technical Reasoning hypothesis 45 in cognitive neuroscience posits that humans engage in physical tool use by reasoning about mechanical interactions among objects. By modeling the use of objects as tools based on their abstract properties, this theory explains how (physical) tools can be re-purposed beyond their assigned function. Does this theory apply to digital tool use? To follow up on our previous work 6, we conducted an experiment that forced participants to re-purpose commands to complete a text layout task 25. The results suggest that most participants engaged in Technical Reasoning to re-purpose digital tools, although some experienced “functional fixedness”. By introducing Technical Reasoning to HCI, this work contributes a powerful theoretical model for the design of digital tools. Miguel Renom successfully defended his thesis on this topic, titled “Theoretical bases of human tool use in digital environments” 36.

Technical Reasoning supports the model of instrumental interaction 41 and its associated design principles 42, which we have developed for many years, as well as our more recent work on substrates. This theoretical work grounds our design of new interactions with documents, following our generative theory approach 1.

First, we extended our work on i-LaTeX 43 with a user study to assess its performance. Document description languages such as LaTeX are used extensively to author scientific and technical documents, but editing them is cumbersome: code-based editors only provide generic features, while WYSIWYG interfaces only support a subset of the language. Based on interviews with 11 LaTeX users, we introduced Transitional Representations for document description languages, which enable the visualisation and manipulation of fragments of code in relation to their generated output. i-LaTeX is a LaTeX editor equipped with Transitional Representations of formulae, tables, images, and grid layouts. We ran a 16-participant experiment 20 which showed that Transitional Representations let participants complete common editing tasks significantly faster, with fewer compilations, and with a lower workload.
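
The sketch below conveys the core mechanism with a bare-bones "transitional representation" of a LaTeX tabular in Python: the fragment is located in the source, exposed as an editable grid, and edits are written back to regenerate the code. This is our own minimal illustration and is far simpler than i-LaTeX itself.

    # Locate a tabular, edit it as a grid, write the edit back (sketch).
    import re

    SOURCE = r"""
    \begin{tabular}{ll}
    a & b \\
    c & d \\
    \end{tabular}
    """

    def parse_tabular(src):
        body = re.search(r"\\begin{tabular}{[^}]*}(.*?)\\end{tabular}",
                         src, re.S).group(1)
        rows = [r.strip() for r in body.strip().split(r"\\") if r.strip()]
        return [[c.strip() for c in row.split("&")] for row in rows]

    def write_back(src, grid):
        body = "\n" + " \\\\\n".join(" & ".join(r) for r in grid) + " \\\\\n"
        return re.sub(r"(\\begin{tabular}{[^}]*})(.*?)(\\end{tabular})",
                      lambda m: m.group(1) + body + m.group(3), src, flags=re.S)

    grid = parse_tabular(SOURCE)
    grid[0][1] = "B"                 # edit a cell in the transitional view...
    print(write_back(SOURCE, grid))  # ...and regenerate the LaTeX source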

Second, we have continued our work on the management and authoring of documents by knowledge workers. A key aspect of knowledge work is the analysis and manipulation of sets of related documents. We conducted interviews with patent examiners and scientists and found that all face difficulties using specialized tools for managing text from multiple documents across interconnected activities, including searching, collecting, annotating, organizing, writing and reviewing, while manually tracking their provenance. Based on these interviews, we created Passages 22, interactive objects that reify text selections and can then be manipulated, reused, and shared across multiple tools. Passages directly supports the list of activities above as well as fluid transitions among them. Two user studies showed that participants found Passages both elegant and powerful, facilitating their work practices and enabling greater reuse and novel strategies for analyzing and composing documents. Han Han successfully defended his Ph.D. thesis 35, titled “Designing Representations for Digital Documents”, on Passages and other tools for document management.
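
The core idea can be sketched as a small data structure: a passage reifies a text selection together with its provenance, so it can move across tools without losing its origin. The class and field names below are invented for illustration; they are not Passages' actual implementation.

    # A reified text selection that tracks provenance across tools.
    from dataclasses import dataclass, field

    @dataclass
    class Passage:
        text: str
        source_doc: str              # document of origin
        span: tuple                  # (start, end) offsets in the source
        tags: list = field(default_factory=list)
        history: list = field(default_factory=list)  # provenance trail

        def annotate(self, note):
            self.history.append(("annotated", note))
            self.tags.append(note)

        def reuse_in(self, target_doc):
            # Reuse keeps provenance rather than copying bare text, so any
            # tool can later trace the passage back to its source.
            self.history.append(("reused-in", target_doc))
            return self.text

    p = Passage("prior art on tiled displays", "patent-123.pdf", (480, 510))
    p.annotate("relevant to claim 2")
    draft = p.reuse_in("review-draft.md")
    print(p.history)  # [('annotated', ...), ('reused-in', 'review-draft.md')]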

We have also continued our long-standing stream of research on fundamental aspects of interaction with new work on gesture elicitation, on gesture learning and on pointing. A fundamental problem of interaction design is how to effectively map computer actions to user input, or gestures. Gesture elicitation studies are commonly used for this purpose 9. However, deriving concrete gesture vocabularies from the gesture elicitation data remains largely based on heuristics and ad hoc methods. We formalized the problem as a computational optimization problem 28. We showed how to define it as an optimal assignment problem and discussed how to express objective functions and custom design constraints through integer programs. In addition, we introduced a set of tools for assessing the uncertainty of optimization outcomes due to random sampling, and for supporting researchers’ decisions on when to stop collecting data from a gesture elicitation study. We provided extensive supplementary material that researchers can use to replicate, verify, and extend our results.
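
As a hedged illustration of the optimization formulation, the sketch below derives a command-to-gesture mapping by maximizing total agreement scores with SciPy's assignment solver. The paper's integer programs are richer, with custom design constraints and uncertainty tools; the scores here are invented.

    # Gesture vocabulary as an optimal assignment problem (sketch).
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    commands = ["copy", "paste", "undo"]
    gestures = ["swipe-left", "pinch", "shake", "double-tap"]

    # score[i, j]: agreement between command i and gesture j, e.g. the
    # proportion of elicitation participants who proposed that pairing.
    score = np.array([
        [0.10, 0.60, 0.05, 0.40],
        [0.20, 0.30, 0.10, 0.70],
        [0.05, 0.10, 0.80, 0.15],
    ])

    rows, cols = linear_sum_assignment(score, maximize=True)
    for i, j in zip(rows, cols):
        print(f"{commands[i]:5s} -> {gestures[j]} (agreement {score[i, j]:.2f})")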

With the increasing interest in movement sonification and expressive gesture-based interaction, it is important to understand which factors contribute to movement learning and how. We explored the effects of movement sonification and users' musical background on motor variability in complex gesture learning 14. We conducted an empirical study in which musicians and non-musicians learned two gesture sequences over three days, with and without movement sonification. Results show the interlaced interaction effects of these factors and how they unfold in the three-day learning process. In particular, we found that the participants' musical background significantly affected their performance.

We also proposed a metric learning method that bridges the gap between human ratings of movement similarity in a motor learning task and computational metric evaluation on the same task 15. We applied metric learning to a Dynamic Time Warping algorithm to derive an optimal set of movement features that best explain human ratings. We evaluated this method on an existing movement dataset, which comprises videos of participants practising a complex gesture sequence toward a target template, as well as the collected data that describes the movements. We showed that it is possible to establish a linear relationship between human ratings and our learned computational metric. This learned metric can be used to identify the most salient temporal moments implicitly used by annotators, as well as movement parameters that correlate with motor improvements in the dataset.
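
A minimal sketch of the computational side: a Dynamic Time Warping distance over weighted movement features, where the per-feature weights stand in for what the metric learning tunes against human ratings. The data and weights below are synthetic, and the paper's actual procedure is more involved.

    # Weighted-feature DTW distance between two movement sequences.
    import numpy as np

    def dtw(a, b, w):
        """a, b: (time, features) arrays; w: learned per-feature weights."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.sqrt(np.sum(w * (a[i - 1] - b[j - 1]) ** 2))
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    rng = np.random.default_rng(0)
    template = rng.standard_normal((30, 3))              # target movement
    attempt = template[::2] + 0.1 * rng.standard_normal((15, 3))

    w = np.array([1.0, 0.5, 0.1])  # illustrative "learned" feature weights
    print(dtw(attempt, template, w))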

Finally, in collaboration with colleagues at Stony Brook University, we introduced a pointing technique that uses reinforcement learning to improve pointing on very small devices such as smartwatches 24. This technique is based on suggesting multiple target candidates when there is uncertainty about which target is designated by the touch. In order to reduce the number of suggestions yet keep error rates low, we introduced SATS, a Suggestion-based Accurate Target Selection method, where target selection is formulated as a sequential decision problem. The objective is to maximize the utility: the negative time cost for the entire target selection procedure. The SATS decision process is dictated by a policy generated using reinforcement learning. It automatically decides when to provide suggestions and when to directly select the target. Our user studies show that SATS reduced error rate and selection time over Shift, a magnification-based method, and MUCS, a suggestion-based alternative that optimizes the utility for the current selection. SATS also significantly reduced error rate over BayesianCommand, which directly selects targets based on posteriors, with only a minor increase in selection time.
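
The heart of the sequential decision can be conveyed by a back-of-the-envelope comparison of expected time costs for selecting directly versus showing suggestions first. SATS itself learns this policy with reinforcement learning over the whole selection procedure; the time constants below are invented for illustration.

    # Direct selection vs. suggestions: which has lower expected cost?
    T_TAP, T_SUGGEST, T_ERROR = 0.3, 0.9, 2.5   # seconds (illustrative)

    def cost_direct(posterior):
        # Select the most probable target; pay an error-recovery cost
        # whenever it was not the intended one.
        return T_TAP + (1 - max(posterior)) * T_ERROR

    def cost_suggest(posterior, k=3):
        # Show k candidates; miss only if the target is outside the top k.
        top_k = sorted(posterior, reverse=True)[:k]
        return T_SUGGEST + (1 - sum(top_k)) * T_ERROR

    posterior = [0.45, 0.30, 0.15, 0.10]        # an ambiguous touch
    print(min(("direct", cost_direct(posterior)),
              ("suggest", cost_suggest(posterior)),
              key=lambda kv: kv[1]))            # -> ('suggest', 1.15)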

7.2 Human-Computer Partnerships

Participants: Wendy Mackay [co-correspondant], Janin Koch [co-correspondant], Téo Sanchez, Nicolas Taffin, Theophanis Tsandilas.

ExSitu is interested in designing effective human-computer partnerships where expert users control their interaction with intelligent systems. Rather than treating human users as the “input” to a computer algorithm, we explore human-centered machine learning, where the goal is to use machine learning and other techniques to increase human capabilities. Much of human-computer interaction research focuses on measuring and improving productivity: our specific goal is to create what we call “co-adaptive systems” that are discoverable, appropriable and expressive for the user.

In collaboration with data-management experts (Université Paris Cité) and visualization experts (ILDA team), we continued our research on progressive visual analytics for large data series collections 12. The database community has optimized similarity data series search by using sophisticated index structures. Still, in large datasets, answering a single similarity search query can take significant time, which is prohibitive for many real-world scenarios that involve visual analysis tools. In this work, we support exploration and decision making by providing progressive results, that is, very quick approximate answers that improve over time and eventually converge to the final exact answer. In addition, we provide users with probabilistic guarantees that help them assess the quality of intermediate answers and predict the time needed to expect the exact answer. Based on these guarantees, we then build criteria for stopping the search much earlier, e.g., when the expected distance error drops below a threshold level, or when the probability that the exact answer has already been found is sufficiently high. We have extended our earlier work to k-NN similarity search and k-NN classification, both for the Euclidean distance and Dynamic Time Warping (DTW). We conducted a large number of experiments to evaluate our approach, using a variety of both synthetic and real datasets up to 100GB in size.
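
The sketch below conveys the progressive flavor of such a search, without the probabilistic guarantees that are the paper's actual contribution: the collection is scanned in chunks, and improving best-so-far k-NN answers under the Euclidean distance are yielded along the way, so a front end can display intermediate results and stop early.

    # Progressive k-NN search: yield improving answers chunk by chunk.
    import numpy as np

    def progressive_knn(query, data, k=5, chunk=10_000):
        best_d = np.full(k, np.inf)  # best distances so far
        best_i = np.full(k, -1)      # corresponding indices
        for start in range(0, len(data), chunk):
            block = data[start:start + chunk]
            d = np.linalg.norm(block - query, axis=1)
            cand_d = np.concatenate([best_d, d])
            cand_i = np.concatenate([best_i,
                                     np.arange(start, start + len(block))])
            keep = np.argsort(cand_d)[:k]
            best_d, best_i = cand_d[keep], cand_i[keep]
            yield best_i.copy(), best_d.copy()   # a progressive answer

    rng = np.random.default_rng(1)
    data = rng.standard_normal((100_000, 64))
    for ids, dists in progressive_knn(data[42], data):
        print(ids[:3], np.round(dists[:3], 3))   # watch the answer converge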

We explored designing and evaluating human-robot interaction from an HCI perspective. We demonstrated how an autobiographical design process can lead to the fabrication of a soft robotic wearable for lower-limb movement guidance in dance practice 18. Our work highlights the use of first-person perspective methods to design human-robot interactions. We shared our results with four dancers, and our experiments illustrate how the wearable both constrains and inspires the dancers towards new ways of performing, challenging them to rethink their movements. The design inquiry contributes reflections on soft robotics that uncover the challenges and prospects that designers and researchers in Human-Computer Interaction face when designing, prototyping and experimenting with such technologies for embodied interactions.

We also examined how disproportionate gender representation is among research participants in human-robot interaction studies, and what impact the gender representation among researchers has on such decisions 16. We produced a dataset covering participant gender representation in all 684 full papers published at the HRI conference from 2006 to 2021 to identify current trends in HRI research participation. We found an over-representation of men in research participants to date, as well as inconsistent and/or incomplete gender reporting that typically engages in a binary treatment of gender, at odds with published best-practice guidelines. We complemented this with a survey of HRI researchers to examine correlations between who is doing the research and who is taking part, and present some evidence for a link between researcher identity and participant diversity.

Figure 2: Illustration of aleatoric and epistemic uncertainties with the Deep Ensemble approach: an ambiguous image (a fuzzy, sideways “6”) is high in aleatoric but low in epistemic uncertainty with respect to the training set (the MNIST handwritten-digit recognition problem), whereas a novel image (a shirt, unrelated to the training set) is high in epistemic but low in aleatoric uncertainty.

We continued our investigation into human-centered machine learning teaching and explainability in the context of human-computer partnerships. Regarding ML teaching, we transferred the practice of estimating ML uncertainty, which professionals use for model evaluation, to help non-experts better understand ML models 26. In particular, we explored how the two types of uncertainty, aleatoric and epistemic, can help non-expert users understand the strengths and weaknesses of a classifier in an interactive setting. We conducted an experiment where non-experts train a classifier to recognize card images and are tested on their ability to predict classifier outcomes (see Fig. 2). Our results showed that participants who used either larger or more varied training sets significantly improved their understanding of both epistemic and aleatoric uncertainty.
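
For readers unfamiliar with the two kinds of uncertainty, here is a compact numeric illustration of the entropy decomposition commonly used with Deep Ensembles: total predictive entropy splits into an aleatoric part (mean member entropy) and an epistemic part (member disagreement). The probability vectors are toy stand-ins, not data from the study.

    # Aleatoric vs. epistemic uncertainty from ensemble predictions.
    import numpy as np

    def entropy(p, axis=-1):
        return -np.sum(p * np.log(p + 1e-12), axis=axis)

    def decompose(member_probs):
        """member_probs: (n_members, n_classes) predicted distributions."""
        total = entropy(member_probs.mean(axis=0))  # predictive uncertainty
        aleatoric = entropy(member_probs).mean()    # data ambiguity
        epistemic = total - aleatoric               # model disagreement
        return total, aleatoric, epistemic

    # Ambiguous input: all members hedge the same way -> high aleatoric.
    ambiguous = np.array([[0.50, 0.50], [0.55, 0.45], [0.45, 0.55]])
    # Novel input: confident members that disagree -> high epistemic.
    novel = np.array([[0.95, 0.05], [0.05, 0.95], [0.90, 0.10]])

    print(decompose(ambiguous))  # low epistemic, high aleatoric
    print(decompose(novel))      # high epistemic, low aleatoric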

We examined the differences between the current understanding of human-centered explainability in HCI and explainable AI (XAI) in AI research 23, discussed current ideas in both fields, and showed an example of combining the two by redefining how systems are designed, based on preliminary work on algebraic machine learning. At the AAAI Fall Symposium 31, we also challenged the AI perspective on Kahneman's Thinking, Fast and Slow (Kahneman, 2011), which provides a simple mental model of how human intelligence builds on components with complementary responsibilities and capabilities. In computer science in general, and artificial intelligence research in particular, these ideas are used to inspire new methods and architectures. We argued that many of those methods use Thinking, Fast and Slow as a token reference while not living up to the definitions of dual-process systems from psychology. In this context, ‘fast’ is not synonymous with neural; fast can also mean, e.g., fixed social interactions for social AI agents. We argued against the narrative that humans are flawed and AI systems can fix that: human bias is highly context-dependent, simplistic applications of dual-process theory to AI are likely to fail, and thus the claim that AI systems will provide users with rationality is flawed. We surveyed and categorized the (mis)use of prospect theory and other dual-process theories with the goal of guiding AI research towards more human-centric artificial intelligence.

Advances in AI technology affect knowledge work in diverse fields, including healthcare, engineering, and management. Although automation and machine support can increase efficiency and lower costs, they can also, as an unintended consequence, deskill workers, who lose valuable skills that would otherwise be maintained as part of their daily work. Such deskilling has a wide range of negative effects on multiple stakeholders: employees, organizations, and society at large. This essay discusses deskilling in the age of AI on three levels: individual, organizational and societal. Deskilling is further analyzed through the lens of four different levels of human-AI configurations, and we argue that one of them, Hybrid Intelligence, could be particularly suitable to help manage the risk of deskilling human experts. Hybrid Intelligence system design and implementation can explicitly take such risks into account and instead foster the upskilling of workers. Hybrid Intelligence may thus, in the long run, lower costs and improve performance and job satisfaction, as well as prevent management from creating unintended organization-wide deskilling 46.

Finally, we co-organized a week-long Dagstuhl Seminar 44 on Human-Centered Artificial Intelligence that brought together 22 participants with diverse backgrounds in AI, Robotics, HCI, Ubiquitous Computing, Business and Sociology, from across Europe and North America, to lay the groundwork for a manifesto on hybrid human-centered AI systems.

7.3 Creativity

Participants: Sarah Fdili Alaoui, Wendy Mackay [correspondant], Tove Grimstad, Manon Vialle, Liz Walton, Janin Koch, Nicolas Taffin.

ExSitu is interested in understanding the work practices of creative professionals who push the limits of interactive technology. We follow a multi-disciplinary participatory design approach, working with both expert and non-expert users in diverse creative contexts.

Caramiaux et al. 11 explore the shift in narrative from artworks created with AI to artworks created by AI. We interviewed internationally renowned artists who use AI in their artwork about their relationship with AI and how they craft it. The paper highlights the role of human labor in their art and demonstrates how artists develop specific skills aimed at crafting AI.

We were also interested in developing technologies that support dancers over time, as their careers and personal practices evolve. Walton and Mackay 30 interviewed 12 professional dancers about a critical moment in their careers: their transition to a new dance style due to shifting interests, aging or injury. We identified three key challenges—overcoming habits, learning new forms of movement, transitioning over time—and the dancers' strategies for addressing them, and argued that successful tools must help dancers change their mentality about new movement styles, rather than focusing solely on movement mechanics. We suggested three possible implications for design: develop “movement substrates” that handle multiple movement representations; integrate learning and reflection in a single session; and create movement definitions through movement. Elizabeth Walton's Ph.D. thesis 37 summarizes this work and other research, including specific guidelines for designing dance support systems that support the long-term evolution of a dancer's work practice.

We also designed a variety of technologies that support dancers’ practice. We designed the Wearable Choreographer 18 as a soft robotic wearable that guides lower limb movement. We illustrated how it both constrains and inspires the dancers towards new ways of performing, and challenges them to rethink their movements. We also explored how to provide embodied guidance for swing dancers.

We also explored an augmented reality visualization to support the learning of Isadora Duncan's movement qualities 29. We presented an abstract representation of choreographic motion that conveys the movement quality of fluidity, central to Isadora Duncan's style of modern dance. The model was designed in collaboration with an expert Duncanian dancer, using five flexible ribbons joined at the solar plexus and animated via motion capture data using a tailored optimization-based algorithm. The model is displayed in a HoloLens headset, which lets the dancer visualize and manipulate the ribbons to understand and learn Duncan's choreographic style. We explored how the system offers professional dancers an immersive experience previously not possible with traditional human-like or skeleton-based representations.

We developed a live coding platform to support joint improvisation between dance movement and interactive sonification 19. These explorations allow us to assess how a variety of technologies can integrate different creative contexts, ranging from improvisation to learning. The goal is to design technologies that practitioners can appropriate and adapt to their personal practices and creative processes. This paper won an Honorable Mention award at CHI'22.

Finally, we co-organized two workshops at the CHI'22 conference. The first one, InContext: Futuring User-Experience Design Tools 38, explored how to create digital tools for user experience and interaction design, with the goal of enabling designers to create appropriate, enjoyable and functional human-computer experiences. Workshop participants brainstormed new forms of design tools that encourage best practice, for example, linking representations, analysis tools, just-in-time evidence, physicality, experience, and crucially, put context at the center of design.

The second workshop, The State of the (CHI)Art 33, explored the current state of art in HCI, computer science and other related fields, and the shifting boundaries of what “art” is in these spaces. By bringing together like-minded and creative individuals, the goal of the workshop was to both inspire and legitimize our diverse practices, present viewpoints, create meaningful outputs, host discussions, and work toward the future of this plurality.

7.4 Collaboration

Participants: Sarah Fdili Alaoui, Michel Beaudouin-Lafon [co-correspondant], Wendy Mackay [co-correspondant], Arthur Fages, Janin Koch, Theophanis Tsandilas.

ExSitu explores new ways of supporting collaborative interaction and remote communication.

We studied collaboration between augmented-reality (AR) users and remote desktop users 13. Such asymmetrical collaboration configurations have become common for many design tasks, due to geographical distance or unusual circumstances such as a lockdown. We conducted a first study to investigate the trade-offs of three remote representations of an AR workspace: a fully virtual representation, a first-person view, and an external view. Building on our findings, we designed a multi-view video-mediated communication system (see Fig. 3) that combines these representations through interactive tools for navigation, previewing, pointing, and annotation. We reported on a second user study that observed how 12 participants used the system to provide remote instructions for an AR furniture arrangement task. Participants extensively used its view transition tools, while the system reduced their reliance on verbal instructions. A live demonstration of our system was presented at IHM'22 32 and was featured by the Belgian TV channel RTBF.

Figure 3: A user wearing an AR headset and a remote desktop collaborator perform a physical furniture arrangement task around a virtual 3D house model. (a) The AR user's view displayed in the headset, and three alternative views available to the remote collaborator in our user interface: (b) a fully virtual view, (c) a first-person view streamed from the headset, and (d) an external view streamed from a depth camera.

We also worked with game masters (GMs), creative practitioners who plan and orchestrate tabletop role-playing games 27. We interviewed eight expert game masters to discover how they adapt everyday technologies and materials as creativity support tools (CSTs) for improvisational and collaborative play. We integrated theories of improvisational and distributed creativity with the human-artifact model, which provides an activity-theoretical vocabulary for analyzing the mediating relationships between specialist practitioners and their tools. We show how GMs prepare and deploy readymade artifacts: analog and digital CSTs that flexibly mediate recurring creative tasks in their practice, such as improvising narrative elements, facilitating smooth play, and creating aesthetic effects. We find that GMs demonstrate designerly thinking as they create, share, and refine repertoires of readymade artifacts. We argue that our theoretical approach can inform future studies of IT-mediated creativity, and that readymade artifacts can be an analytical and generative concept for the design of novel creativity support tools.

Finally, we explored collaboration in the context of Massively Multiplayer Online games (MMOs), where players consume content much faster than game designers can produce it. However, players also generate stories through their interaction, which can contribute novel types of content to the game world. We introduced and demonstrated Play Arcs 21, a design strategy for structuring emergent stories that players can co-design and contribute as unique game content. We also developed an MMO with tools for co-design and ‘history game mechanics’, and tested it as a technology probe with 49 players. We showed that Play Arcs can successfully structure coherent stories and support players in shaping new, unique content based on their own histories. We found that these stories can inform and guide players’ decisions, and also that, while players often share simpler stories directly, they keep more notable stories to themselves for retelling later. This work, part of Viktor Gustafsson's Ph.D. thesis, led to a startup at the Inria Startup Studio, called RealSpawn, which is currently in contract negotiation with a video game company.

8 Bilateral contracts and grants with industry

8.1 Bilateral contracts with industry

Participants: Wendy Mackay, Wissal Sahel, Robert Falcasantos.

CAB: Cockpit and Bidirectional Assistant
  • Title:
    Smart Cockpit Project
  • Duration:
    Sept 2020 - August 2024
  • Coordinator:
    SystemX Technological Research Institute
  • Partners:
    • SystemX
    • EDF
    • Dassault
    • RATP
    • Orange
    • Inria
  • Inria contact:
    Wendy Mackay
  • Summary:
    The goal of the CAB Smart Cockpit project is to define and evaluate an intelligent cockpit that integrates a bi-directional virtual agent to increase, in real time, the capacities of operators facing complex and/or atypical situations. The project seeks to develop a new foundation for sharing agency between human users and intelligent systems: to empower rather than deskill users by letting them learn throughout the process, and to let users maintain control, even as their goals and circumstances change.

9 Partnerships and cooperations

9.1 International initiatives

Participants: Theophanis Tsandilas, Capucine Nghiem.

9.1.1 Participation in other International Programs

GRAVIDES
  • Title:
    Grammars for Visualization-Design Sketching
  • Funding:
    CNRS - University of Toronto Collaboration Program
  • Duration:
    2021 - 2023
  • Coordinator:
    Theophanis Tsandilas and Fanny Chevalier
  • Partners:
    • CNRS – LISN
    • University of Toronto – Dept. of Computer Science
  • Inria contact:
    Theophanis Tsandilas
  • Summary:
    The goal of the project is to create novel visualization authoring tools that enable users with no design expertise to sketch and visually express their personal data.

9.2 International research visitors

9.2.1 Visits of international scientists

Professors Narges Mahyar and Ali Sarvghad from the University of Massachusetts, Amherst visited the lab for four days in December 2022 and gave talks about their research. Professor James Hollan, University of California, San Diego, spoke on 1 March and visited the lab on 30 March, 2022. Professor Stéphane Conversy, ENAC (École Nationale de l'Aviation Civile), gave a talk on 15 March, 2022. Professor Laurence Nigay, University of Grenoble, gave a talk and visited the lab on 29 March, 2022. Professor Géry Casiez, University of Lille, gave a talk on 22 March, 2022. Professor Yvonne Rogers, University College London, U.K., gave a talk and visited the lab on 12 April, 2022. Professor Albrecht Schmidt, University of Munich, gave a talk on 19 April, 2022. Dr. Thomas Baudel, IBM Research, gave a talk and visited the lab on 20 April, 2022. Professor Nicolai Marquardt, University College London, gave a talk and visited the lab on 19 May, 2022.

Inria International Chair

Joanna McGrenere, Professor, University of British Columbia, Canada, visited the lab in June and July 2022, as part of her final year as an Inria International Chair. She collaborated on several research papers and served as a member of the jury for the 2022 creARTathon.

9.3 European initiatives

9.3.1 Horizon Europe

SustainML
  • Title:
    Application Aware, Life-Cycle Oriented Model-Hardware Co-Design Framework for Sustainable, Energy Efficient ML Systems
  • Duration:
    From October 1, 2022 to September 30, 2025
  • Partners:
    • INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE (INRIA), France
    • PROYECTOS Y SISTEMAS DE MANTENIMIENTO SL (EPROSIMA EPROS), Spain
    • IBM RESEARCH GMBH (IBM), Switzerland
    • SAS UPMEM, France
    • KOBENHAVNS UNIVERSITET (UCPH), Denmark
    • DEUTSCHES FORSCHUNGSZENTRUM FUR KUNSTLICHE INTELLIGENZ GMBH (DFKI), Germany
    • TECHNISCHE UNIVERSITAT KAISERSLAUTERN, Germany
  • Inria contact:
    Janin Koch
  • Coordinator:
    PROYECTOS Y SISTEMAS DE MANTENIMIENTO SL (EPROSIMA EPROS), Spain
  • Summary:
    AI is increasingly becoming a significant factor in the CO2 footprint of the European economy. To avoid a conflict between sustainability and economic competitiveness, and to allow the European economy to leverage AI for its leadership in a climate-friendly way, new technologies are needed to reduce the energy requirements of all parts of AI systems. A key problem is the fact that the tools (e.g. PyTorch) and methods that currently drive the rapid spread and democratization of AI prioritize performance and functionality while paying little attention to the CO2 footprint. As a consequence, we see rapid growth in AI applications, but far less in AI applications optimized for low power and sustainability. To change that, we aim to develop an interactive design framework, and associated models, methods and tools, that will foster energy efficiency throughout the whole life-cycle of ML applications: from the design and exploration phase, which includes exploratory iterations of training, testing and optimizing different system versions, through the final training of the production systems (which often involves huge amounts of data, computation and epochs), to (where appropriate) continuous online re-training during deployment for the inference process. The framework will optimize ML solutions based on the application tasks, across levels from hardware to model architecture. AI developers at all experience levels will be able to use the framework through its emphasis on human-centric, interactive, transparent design and functional knowledge cores, instead of the common black-box and fully automated optimization approaches in AutoML. The framework will be made available on the AI4EU platform and disseminated through close collaboration with initiatives such as the ICT 48 networks. It will also be directly exploited by the industrial partners, who represent various parts of the relevant value chain: from software frameworks, through hardware, to AI services.

9.3.2 H2020 projects

HumanE AI
  • Title:
    Toward AI Systems That Augment and Empower Humans by Understanding Us, our Society and the World Around Us
  • Duration:
    Sept 2020 - August 2024
  • Coordinator:
    DFKI
  • Partners:
    • Aalto Korkeakoulusaatio SR (Finland)
    • Agencia Estatal Consejo Superior Deinvestigaciones Cientificas (Spain)
    • Albert-ludwigs-universitaet Freiburg (Germany)
    • Athina-erevnitiko Kentro Kainotomias Stis Technologies Tis Pliroforias, Ton Epikoinonion Kai Tis Gnosis (Greece)
    • Consiglio Nazionale Delle Ricerche (Italy)
    • Deutsches Forschungszentrum Fur Kunstliche Intelligenz GMBH (Germany)
    • Eidgenoessische Technische Hochschule Zuerich (Switzerland)
    • Fondazione Bruno Kessler (Italy)
    • German Entrepreneurship GMBH (Germany)
    • INESC TEC - Instituto De Engenhariade Sistemas E Computadores, Tecnologia E Ciencia (Portugal)
    • ING GROEP NV (Netherlands)
    • Institut Jozef Stefan (Slovenia)
    • Institut Polytechnique De Grenoble (France)
    • Knowledge 4 All Foundation LBG (UK)
    • Kobenhavns Universitet (Denmark)
    • Kozep-europai Egyetem (Hungary)
    • Ludwig-maximilians-universitaet Muenchen (Germany)
    • Max-planck-gesellschaft Zur Forderung Der Wissenschaften EV (Germany)
    • Technische Universitaet Wien (Austria)
    • Technische Universitat Berlin (Germany)
    • Technische Universiteit Delft (Netherlands)
    • Thales SIX GTS France SAS (France)
    • The University Of Sussex (UK)
    • Universidad Pompeu Fabra (Spain)
    • Universita di Pisa (Italy)
    • Universiteit Leiden (Netherlands)
    • University College Cork - National University of Ireland, Cork (Ireland)
    • Uniwersytet Warszawski (Poland)
    • Volkswagen AG (Germany)
  • Inria contact:
    Wendy Mackay and Janin Koch
  • Coordinator:
    DEUTSCHES FORSCHUNGSZENTRUM FUR KUNSTLICHE INTELLIGENZ GMBH (DFKI), Germany
  • Summary:
    The goal of the HumanE AI project is to create artificial intelligence technologies that work synergistically with humans, fit seamlessly into our complex social settings, and dynamically adapt to changes in our environment. Such technologies will empower humans with AI, allowing individuals and society to reach new potential and to deal more effectively with the complexity of a networked, globalized world.

ALMA
  • Title:
    ALMA: Human Centric Algebraic Machine Learning
  • Duration:
    From September 1, 2020 to August 31, 2024
  • Partners:
    • INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE (INRIA), France
    • TEKNOLOGIAN TUTKIMUSKESKUS VTT OY (VTT), Finland
    • PROYECTOS Y SISTEMAS DE MANTENIMIENTO SL (EPROSIMA EPROS), Spain
    • ALGEBRAIC AI SL, Spain
    • DEUTSCHES FORSCHUNGSZENTRUM FUR KUNSTLICHE INTELLIGENZ GMBH (DFKI), Germany
    • TECHNISCHE UNIVERSITAT KAISERSLAUTERN, Germany
    • FIWARE FOUNDATION EV (FIWARE), Germany
    • UNIVERSIDAD CARLOS III DE MADRID (UC3M), Spain
    • FUNDACAO D. ANNA DE SOMMER CHAMPALIMAUD E DR. CARLOS MONTEZ CHAMPALIMAUD (FUNDACAO CHAMPALIMAUD), Portugal
  • Inria contact:
    Wendy Mackay
  • Coordinator:
    PROYECTOS Y SISTEMAS DE MANTENIMIENTO SL (EPROSIMA EPROS), Spain
  • Summary:

    Algebraic Machine Learning (AML) has recently been proposed as a new learning paradigm that builds upon Abstract Algebra and Model Theory. Unlike other popular learning algorithms, AML is not a statistical method: it produces generalizing models from semantic embeddings of data into discrete algebraic structures, with the following properties:

    P1. It is far less sensitive to the statistical characteristics of the training data and does not fit (or even use) parameters;

    P2. It has the potential to seamlessly integrate the unstructured and complex information contained in training data with a formal representation of human knowledge and requirements;

    P3. It uses internal representations based on discrete sets and graphs, offering a good starting point for generating human-understandable descriptions of what, why and how something has been learned;

    P4. It can be implemented in a distributed way that avoids centralized, privacy-invasive collections of large data sets in favor of a collaboration of many local learners at the level of learned partial representations.

    The aim of the project is to leverage the above properties of AML for a new generation of Interactive, Human-Centric Machine Learning systems that will:

    - Reduce bias and prevent discrimination by reducing dependence on statistical properties of training data (P1), integrating human knowledge with constraints (P2), and exploring the how and why of the learning process (P3)

    - Facilitate trust and reliability by respecting ‘hard’ human-defined constraints in the learning process (P2) and enhancing explainability of the learning process (P3)

    - Integrate complex ethical constraints into Human-AI systems by going beyond basic bias and discrimination prevention (P2) to interactively shaping the ethics related to the learning process between humans and the AI system (P3)

    - Facilitate a new distributed, incremental collaborative learning method by going beyond the dominant off-line and centralized data processing approach (P4)

ONE
  • Title:
    ONE: Unified Principles of Interaction
  • Funding:
    European Research Council (ERC Advanced Grant)
  • Duration:
    October 2016 - March 2023
  • Coordinator:
    Michel Beaudouin-Lafon
  • Summary:
    The goal of ONE is to fundamentally re-think the basic principles and conceptual model of interactive systems to empower users by letting them appropriate their digital environment. The project addresses this challenge through three interleaved strands: empirical studies to better understand interaction in both the physical and digital worlds, theoretical work to create a conceptual model of interaction and interactive systems, and prototype development to test these principles and concepts in the lab and in the field. Drawing inspiration from physics, biology and psychology, the conceptual model combines substrates to manage digital information at various levels of abstraction and representation, instruments to manipulate substrates, and environments to organize substrates and instruments into digital workspaces.

9.4 National initiatives

eNSEMBLE
  • Title:
    Future of Digital Collaboration
  • Type:
    PEPR Exploratoire
  • Duration:
    2022 – 2030
  • Coordinator:
    Gilles Bailly, Michel Beaudouin-Lafon, Stéphane Huot, Laurence Nigay
  • Partners:
    • Centre National de la Recherche Scientifique (CNRS)
    • Institut National de Recherche en Informatique et Automatique (Inria)
    • Université Grenoble Alpes
    • Université Paris-Saclay
  • Budget:
    38.25 M€ of public funding from ANR / France 2030
  • Summary:

    The purpose of eNSEMBLE is to fundamentally redefine digital tools for collaboration. Whether to reduce our travel, to better connect society across the territory, or to face the problems and transformations of the coming decades, the challenges of the 21st century will require us to collaborate at an unprecedented speed and scale.

    To address this challenge, a paradigm shift in the design of collaborative systems is needed, comparable to the one that saw the advent of personal computing. To achieve this goal, we need to invent mixed (i.e. physical and digital) collaboration spaces that do not simply replicate the physical world in virtual environments, enabling co-located and/or geographically distributed teams to work together smoothly and efficiently.

    Beyond this technological challenge, the eNSEMBLE project also addresses sovereignty and societal challenges: by creating the conditions for interoperability between communication and sharing services in order to open up the "private walled gardens" that currently require all participants to use the same services, we will enable new players to offer solutions adapted to the needs and contexts of use. Users will thus be able to choose combinations of potentially "intelligent" tools and services for defining mixed collaboration spaces that meet their needs without compromising their ability to exchange with the rest of the world. By making these services more accessible to a wider population, we will also help reduce the digital divide.

    These challenges require a major long-term investment in multidisciplinary work (Computer Science, Ergonomics, Cognitive Psychology, Sociology, Design, Law, Economics) of both theoretical and empirical nature. The scientific challenges addressed by eNSEMBLE are:

    • Designing novel collaborative environments and conceptual models;
    • Combining human and artificial agency in collaborative set-ups;
    • Enabling fluid collaborative experiences that support interoperability;
    • Supporting the creation of healthy and sustainable collectives; and
    • Specifying socio-technical norms with legal/regulatory frameworks.

    eNSEMBLE will impact many sectors of society (education, health, industry, science, services, public life, leisure) by improving productivity, learning, care and well-being, as well as participatory democracy.

CONTINUUM
  • Title:
    Collaborative continuum from digital to human
  • Type:
    EQUIPEX+ (Equipement d'Excellence)
  • Duration:
    2020 – 2029
  • Coordinator:
    Michel Beaudouin-Lafon
  • Partners:
    • Centre National de la Recherche Scientifique (CNRS)
    • Institut National de Recherche en Informatique et Automatique (Inria)
    • Commissariat à l'Energie Atomique et aux Energies Alternatives (CEA)
    • Université de Rennes 1
    • Université de Rennes 2
    • Ecole Normale Supérieure de Rennes
    • Institut National des Sciences Appliquées de Rennes
    • Aix-Marseille University
    • Université de Technologie de Compiègne
    • Université de Lille
    • Ecole Nationale d'Ingénieurs de Brest
    • Ecole Nationale Supérieure Mines-Télécom Atlantique Bretagne-Pays de la Loire
    • Université Grenoble Alpes
    • Institut National Polytechnique de Grenoble
    • Ecole Nationale Supérieure des Arts et Métiers
    • Université de Strasbourg
    • COMUE UBFC Université de Technologie Belfort Montbéliard
    • Université Paris-Saclay
    • Télécom Paris - Institut Polytechnique de Paris
    • Ecole Normale Supérieure Paris-Saclay
    • CentraleSupélec
    • Université de Versailles - Saint-Quentin
  • Budget:
    13.6 M€ of public funding from ANR
  • Summary:
    The CONTINUUM project will create a collaborative research infrastructure of 30 platforms located throughout France, to advance interdisciplinary research based on interaction between computer science and the human and social sciences. Thanks to CONTINUUM, 37 research teams will develop cutting-edge research programs focusing on visualization, immersion, interaction and collaboration, as well as on human perception, cognition and behaviour in virtual/augmented reality, with potential impact on societal issues. CONTINUUM enables a paradigm shift in the way we perceive, interact, and collaborate with complex digital data and digital worlds by putting humans at the center of the data processing workflows. The project will empower scientists, engineers and industry users with a highly interconnected network of high-performance visualization and immersive platforms to observe, manipulate, understand and share digital data, real-time multi-scale simulations, and virtual or augmented experiences. All platforms will feature facilities for remote collaboration with other platforms, as well as mobile equipment that can be lent to users to facilitate onboarding.

GLACIS
  • Title:
    Graphical Languages for Creating Infographics
  • Funding:
    ANR
  • Duration:
    2022 - 2025
  • Coordinator:
    Theophanis Tsandilas
  • Partners:
    • Inria Saclay (Theophanis Tsandilas, Michel Beaudouin-Lafon, Pierre Dragicevic)
    • Inria Sophia Antipolis (Adrien Bousseau)
    • École Centrale de Lyon (Romain Vuillemot)
    • University of Toronto (Fanny Chevalier)
  • Inria contact:
    Theophanis Tsandilas
  • Summary:
    This project investigates interactive tools and techniques that can help graphic designers, illustrators, data journalists, and infographic artists produce creative and effective visualizations for communication purposes, e.g., to inform the public about the evolution of a pandemic or to help novices interpret global-warming predictions.

Living Archive
  • Title:
    Interactive Documentation of Dance Heritage
  • Funding:
    ANR JCJC
  • Duration:
    2020 – 2024
  • Coordinator:
    Sarah Fdili Alaoui
  • Partners:
    Université Paris-Saclay
  • Inria contact:
    Sarah Fdili Alaoui
  • Summary:
    The goal of this project is to design accessible, flexible and adaptable interactive systems that allow practitioners to easily document their dance using their own methods and personal artifacts, emphasizing their first-person perspective. We ground our methodology in action research, where we seek, through long-term commitment to fieldwork and collaboration, to simultaneously contribute to knowledge in Human-Computer Interaction and to benefit the communities of practice. More specifically, the interactive systems will allow dance practitioners to generate interactive repositories made of self-curated collections of heterogeneous materials that capture and document their dance practices from their first-person perspective. We will deploy these systems in real-world situations through long-term fieldwork that aims both to assess the technology and to benefit the communities of practice, exemplifying socially relevant, collaborative, and engaged research.

ELEMENT
  • Title:
    Enabling Learnability in Human Movement Interaction
  • Funding:
    ANR
  • Duration:
    2019 - 2022
  • Coordinator:
    Sarah Fdili Alaoui
  • Partners:
    • Inria (Wendy Mackay)
    • IRCAM (Frédéric Bévilacqua)
    • Université Paris Saclay (Sarah Fdili Alaoui)
    • CNRS (Jules Françoise and Baptiste Caramiaux)
  • Inria contact:
    Sarah Fdili Alaoui
  • Summary:
    The goal of this project is to foster innovation in multimodal interaction, from non-verbal communication to interaction with digital media and content in creative applications, by addressing two critical issues: the design of learnable gestures and movements, and the development of interaction models that adapt to a variety of levels of user expertise and facilitate human sensorimotor learning. The aim is to move towards movement interactions with a “low entry fee with no ceiling on virtuosity”.

10 Dissemination

Participants: Michel Beaudouin-Lafon, Arthur Fages, Sarah Fdili Alaoui, Camille Gobert, Tove Grimstad Bang, Alexandre Kabil, Janin Koch, Wendy Mackay, Joanna McGrenere, Anna Offenwanger, Miguel Renom, Téo Sanchez, Nicolas Taffin, Theophanis Tsandilas.

10.1 Scientific events: organisation

Member of the organizing committees
  • ACM DIS 2023, ACM Designing Interactive Systems 2023, Technical Program Chair: Sarah Fdili Alaoui
  • ACM C&C 2023, ACM Creativity & Cognition 2023, Paper Chair: Sarah Fdili Alaoui
  • MOCO 2022, International Conference on Movement and Computing, Steering Committee member: Sarah Fdili Alaoui
  • HHAI 2022, International Conference on Hybrid Human-Artificial Intelligence, Poster and Demo: Janin Koch (chair)
  • CHI 2022, ACM CHI Conference on Human Factors in Computing Systems, Student Research Competition: Janin Koch (jury)
  • CHI 2023, ACM CHI Conference on Human Factors in Computing Systems, Student Design Competition: Wendy Mackay (jury)

10.2 Scientific events: selection

Member of the conference program committees
  • ACM CHI 2023, ACM CHI Conference on Human Factors in Computing Systems: Michel Beaudouin-Lafon, Wendy Mackay, Sarah Fdili Alaoui
  • ACM C&C 2022, ACM Creativity & Cognition 2022: Janin Koch
  • ACM DIS 2022, ACM Designing Interactive Systems 2022: Sarah Fdili Alaoui
  • ICCC 2022, International Conference on Computational Creativity: Janin Koch
  • IJCAI-ECAI 22 AI4G, Special track on AI for Good: Michel Beaudouin-Lafon
Reviewer
  • ACM CHI 2023, ACM CHI Conference on Human Factors in Computing Systems: Janin Koch, Theophanis Tsandilas, Tove Grimstad Bang, Anna Offenwanger, Camille Gobert
  • ACM UIST 2022, ACM Symposium on User Interface Software and Technology: Theophanis Tsandilas, Janin Koch
  • ACM DIS 2022, ACM Designing Interactive Systems: Tove Grimstad Bang
  • IEEE VIS 2022, IEEE Visualization and Visual Analytics Conference: Theophanis Tsandilas
  • ozCHI 2022, Australian Conference on Human-Computer Interaction: Theophanis Tsandilas

10.3 Journal

Member of the editorial boards
  • Editor for the Human-Computer Interaction area of the ACM Books Series: Michel Beaudouin-Lafon (2013-)
  • TOCHI, Transactions on Computer Human Interaction, ACM: Michel Beaudouin-Lafon (2009-), Wendy Mackay (2016-)
  • JIPS, Journal d'Interaction Personne-Système, AFIHM: Michel Beaudouin-Lafon (2009-)
  • ACM Tech Briefs: Michel Beaudouin-Lafon (2021-)
  • JAIR, Journal of Artificial Intelligence Research, Special Issue on Human-Centred Computing: Wendy Mackay (co-editor) (2022-2023)
  • ACM New Publications Board: Wendy Mackay (2020-)
  • CACM Editorial Board Online: Wendy Mackay (2020-)
  • CACM Website Redesign: Wendy Mackay (2022)
Reviewer - reviewing activities
  • IEEE TVCG, IEEE Transactions on Visualization and Computer Graphics: Theophanis Tsandilas
  • Elsevier Computers and Graphics: Theophanis Tsandilas
  • HCIJ, Human-Computer Interaction Journal: Janin Koch

10.4 Invited talks

  • “Participatory Design for Hybrid Intelligent Systems”. (Invited Lecture) Télécom Paris, 26 January 2021: Wendy Mackay
  • “Human-Computer Partnerships”. (Keynote) RTE Research Seminar, Paris, 1 February 2022: Wendy Mackay
  • “Interaction instrumentale et substrats interactifs”. (Invited Lecture) Chaire annuelle Informatique et Sciences Numériques du Collège de France, Paris, March 2022: Michel Beaudouin-Lafon
  • “Human-Centered AI”. (Invited Panel) GESDA 2022 Science Breakthrough Radar Workshop 1: Geneva Science and Diplomacy Anticipator: anticipating future advances around Collective Intelligence. 30 March 2022: Wendy Mackay
  • “Participatory Design for Intelligent Systems”. (Invited Lecture) HumanE AI Net Workshop on Designing Human-Centric AI Curricula. Online: 31 March 2022: Wendy Mackay
  • “Les Femmes dans l’IHM”. (Presentation) IHM Conference, Namur, Belgium, 7 April 2022: Sarah Fdili Alaoui
  • “HCI and Music”. (Keynote) CHIME Computer Human Interaction and Music nEtwork Workshop. Online: 27 April 2022: Wendy Mackay
  • “Human-Centered AI”. (Invited Panel) GESDA 2022 Science Breakthrough Radar Workshop 2: Geneva Science and Diplomacy Anticipator: anticipating future advances around Collective Intelligence. 10 May 2022: Wendy Mackay
  • “Supporting the European digital sovereignty with a network of personal containers”. (Invited Lecture) German-French Conference on European Digital Sovereignty. Munich, Germany, 11 May 2022: Camille Gobert
  • “Participatory Design”. (Invited Lecture) École Boulle, Paris. 17 May 2022: Wendy Mackay
  • “Human-Computer Partnerships”. (Keynote) Journées Doctorales, Université Paris-Saclay, 22 May 2022: Wendy Mackay
  • “Information Theory Meets Human-Computer Partnerships”. (Invited Lecture) Colloque Human-Computer Partnerships, Chaire annuelle Informatique et Sciences Numériques du Collège de France, Paris, 23 May 2022: Michel Beaudouin-Lafon
  • “Balancing Trust and Risk”. (Invited Panel) 75th anniversary of ACM, San Francisco, USA, 10 June 2022: Michel Beaudouin-Lafon
  • “Le mouvement sonore pour la documentation et la transmission de la danse”. (Invited Lecture) Rencontres thématiques – Design des Dispositifs – autour de l'exposition “Le village”, ENSCI, ENS Paris-Saclay, 10 June 2022: Tove Grimstad Bang
  • “Human-Computer Partnerships”. (Keynote) HHAI 2022, International Conference on Hybrid Human-Artificial Intelligence, Vrije Universiteit, Amsterdam. 16 June 2022: Wendy Mackay
  • “Augment rather than replace humans: Focus on interaction not algorithms”. (Invited Talk) Dagstuhl Seminar, Wadern, Germany. 27 June 2022: Wendy Mackay
  • “Participatory Design”. (Invited Lecture) Creartathon 2022, Université Paris-Saclay. 7 July 2022: Wendy Mackay
  • “Creating Human-Computer Partnerships”. (Keynote and Panel) ICML 2022 Conference, HMCaT, Amsterdam, The Netherlands. 23 July 2022: Wendy Mackay
  • “How can we better interact with programming languages?”. (Invited Talk) MDENet Thematic Workshop on HCI (online), 27 September 2022: Camille Gobert
  • “Cultiver la créativité et l’intelligence humaine : une urgence à l'ère des robots ?”. (Invited Panel) Le Mondial du Bâtiment, Paris. 6 October 2022: Wendy Mackay
  • “Performance as a site for (critical) research & design”. (Invited Lecture) University of Nantes, France, 10 October 2022: Sarah Fdili Alaoui
  • “How Can We Prepare for Collaborative Human-Machine Intelligence?”. (Invited Panel) Geneva Science and Diplomacy Anticipator (GESDA), Geneva, Switzerland, 14 October 2022: Wendy Mackay
  • “Reflecting on how extended reality could support creative tasks”. (Invited Lecture) Joint ERCIM-JST Workshop, Inria Rocquencourt, 20 October 2022: Theophanis Tsandilas
  • “Probing dance practice, learning and transmission”. (Invited Lecture) Element days, IRCAM, Paris, 22 October 2022: Sarah Fdili Alaoui
  • “Towards Intelligent Tangible Interfaces”. (Keynote) ETIS (European Tangible Interaction Studio), Toulouse, 9 November 2022: Wendy Mackay
  • “Embodied Interaction in Dance: methods, creations and critical reflections”. (Keynote) Workshop on embodied perspectives on musical AI, University of Oslo, Norway, 21 November 2022: Sarah Fdili Alaoui
  • “Défis de l’Interaction Humain-Machine”. (Keynote) Journées Scientifiques Inria, Rocquencourt, 25 November 2022: Wendy Mackay
  • Lecture Series for the Collège de France: “Interagir avec l'ordinateur” by Wendy Mackay. From 24 February to 19 April 2022.
    • “Leçon Inaugurale : Réimaginer nos interactions avec le monde numérique”, Collège de France, Paris. 24 February 2022: Wendy Mackay.
    • “Leçon 1 : Les capacités humaines pour l'interaction”, Collège de France, Paris. 1 March 2022: Wendy Mackay.
    • “Leçon 2 : Les capacités de l'ordinateur pour l'interaction”, Collège de France, Paris. 8 March 2022: Wendy Mackay.
    • “Leçon 3 : La conception de systèmes interactifs”, Collège de France, Paris. 15 March 2022: Wendy Mackay.
    • “Leçon 4 : L'évaluation des systèmes interactifs”, Collège de France, Paris. 22 March 2022: Wendy Mackay.
    • “Leçon 5 : L'interaction multimodale : comment interagir avec tout le corps”, Collège de France, Paris. 29 March 2022: Wendy Mackay.
    • “Leçon 6 : La réalité augmentée et virtuelle : comment intégrer l'informatique avec le monde réel”, Collège de France, Paris. 5 April 2022: Wendy Mackay.
    • “Leçon 7 : La communication médiatisée : comment concevoir les systèmes collaboratifs”, Collège de France, Paris. 12 April 2022: Wendy Mackay.
    • “Leçon 8 : Les partenariats humain-machine : comment interagir avec l'intelligence artificielle”, Collège de France, Paris. 19 April 2022: Wendy Mackay.

10.5 Leadership within the scientific community

  • PEPR eNSEMBLE: Michel Beaudouin-Lafon (co-director), Wendy Mackay (co-director, Axis 5)
  • CONTINUUM research infrastructure: Michel Beaudouin-Lafon (Scientific director)
  • Laboratoire Interdisciplinaire des Sciences du Numérique (LISN): Michel Beaudouin-Lafon (deputy director)
  • RTRA Digiteo (Research network in Computer Science), Université Paris-Saclay: Michel Beaudouin-Lafon (Director)
  • ACM Technology Policy Council: Michel Beaudouin-Lafon (Vice-chair)
  • ACM Policy Award committee: Michel Beaudouin-Lafon (chair)
  • Inria Commission Consultative Paritaire (CCP): Wendy Mackay (president)
  • Dutch Hybrid Intelligence Consortium: Hybrid AI Scientific Advisory Board, The Netherlands: Wendy Mackay (2021-).

10.6 Scientific expertise

  • External reviewer for the Dutch Research Council (research grants): Theophanis Tsandilas, Wendy Mackay
  • Jury member for CRCN and ISFP Inria competition: Theophanis Tsandilas
  • Jury member for the Creartathon 2022, Université Paris-Saclay, student design competition: Joanna McGrenere
  • Panel member (PE-6) of the Qualitative Evaluation of Completed Projects of the European Research Council (ERC): Michel Beaudouin-Lafon
  • External reviewer for Natural Sciences and Engineering Research Council of Canada (NSERC): Michel Beaudouin-Lafon
  • External evaluator for an ANR project: Michel Beaudouin-Lafon
  • Télécom Paris PhD program "Futurs et Ruptures" panel member: Michel Beaudouin-Lafon

10.6.1 Research administration

  • "Comité de Séléction", Université d'Evry, Poste de Professeur en Langue anglaise et Jeu vidéo, Web, Nouveaux Médias. Wendy Mackay (jury member).
  • “Commission Scientifique”, Inria: Theophanis Tsandilas (member)
  • “Référent Données” pour Inria Saclay: Theophanis Tsandilas
  • Paris SIGCHI: Theophanis Tsandilas (vice-president)
  • Inria Paris-Saclay scientific mediation for Art, Science and Society: Janin Koch (co-organizer)
  • ACM Europe Technology Policy Committee: Michel Beaudouin-Lafon (member)
  • ACM Europe Research Visibility Working Group (RAISE): Michel Beaudouin-Lafon (member)

10.7 Teaching - Supervision - Juries

10.7.1 Teaching

  • International Masters: Theophanis Tsandilas, Probabilities and Statistics, 32h, M1, Télécom SudParis, Institut Polytechnique de Paris
  • Interaction & HCID Masters: Michel Beaudouin-Lafon, Wendy Mackay, Fundamentals of Situated Interaction, 21h, M1/M2, Univ. Paris-Saclay
  • Interaction & HCID Masters: Sarah Fdili Alaoui, Creative Design, 21h, M1/M2, Univ. Paris-Saclay
  • Interaction & HCID Masters: Sarah Fdili Alaoui, Studio Art Science in collaboration with Centre Pompidou, 21h, M1/M2, Univ. Paris-Saclay
  • Interaction & HCID Masters: Michel Beaudouin-Lafon, Fundamentals of Human-Computer Interaction, 21h, M1/M2, Univ. Paris-Saclay
  • Interaction & HCID Masters: Michel Beaudouin-Lafon, Groupware and Collaborative Interaction, 21h, M1/M2, Univ. Paris-Saclay
  • Interaction & HCID Masters: Wendy Mackay, HCI Winter School, 21h, M2, Univ. Paris-Saclay
  • Interaction & HCID Masters: Wendy Mackay and Janin Koch, Design of Interactive Systems, 42h, M1/M2, Univ. Paris-Saclay
  • Interaction & HCID Masters: Wendy Mackay and Janin Koch, Advanced Design of Interactive Systems, 21h, M1/M2, Univ. Paris-Saclay
  • Inria & Université Paris-Saclay Creartathon Master Classes, 8 July 2022: Michel Beaudouin-Lafon, Janin Koch, Wendy Mackay
  • Licence Informatique: Michel Beaudouin-Lafon, Introduction to Human-Computer Interaction, 9h, second year, Univ. Paris-Saclay

PhD students - As teaching assistants

  • Polytech App5: Arthur Fages, Réalité virtuelle et interactions, 48h, M2, Polytech Paris-Saclay, Univ. Paris-Saclay
  • Polytech Et3: Arthur Fages, Projet Java-Graphique IHM, 24h, L3, Polytech Paris-Saclay, Univ. Paris-Saclay
  • Bachelor math and computer science: Téo Sanchez, Introduction to imperative programming, 25h, L1, Univ. Paris-Saclay
  • Bachelor math and computer science: Téo Sanchez, Introduction to computer science, 24h, L1, Univ. Paris-Saclay
  • Bachelor math and computer science: Téo Sanchez, Introduction to data science, 24h, L1, Univ. Paris-Saclay
  • Interaction & HCID Masters: Téo Sanchez, Interactive Machine Learning, 12h, M1, Univ. Paris-Saclay
  • Licence Informatique: Miguel Renom, Programmation Web, 24h, L3, Polytech Paris-Saclay
  • Interaction & HCID Masters: Camille Gobert, Advanced Programming of Interactive Systems, 21h, M1/M2, Univ. Paris-Saclay
  • Computer Science Bachelor: Camille Gobert, Introduction à l'Interaction Humain-Machine, 12h, L2, Univ. Paris-Saclay
  • Licence Double Diplôme Informatique, Mathématiques: Capucine Nghiem, TP: Introduction à la Programmation Impérative, 22h, L1, Univ. Paris-Saclay
  • Licence Informatique: Capucine Nghiem, TP: Introduction à l'Informatique Graphique, 10h, L1, Univ. Paris-Saclay

10.7.2 Supervision

PhD students

  • Defended PhD: Téo Sanchez, Interactive Machine Teaching with and for Novices, 20 June 2022. Advisors: Baptiste Caramiaux & Wendy Mackay
  • Defended PhD: Miguel Renom, Theoretical bases of human tool use in digital environments, 12 April 2022. Advisors: Michel Beaudouin-Lafon & Baptiste Caramiaux
  • Defended PhD: Han Han, Designing Representations for Digital Documents, 30 March 2022. Advisor: Michel Beaudouin-Lafon
  • Defended PhD: Elizabeth Walton, Dance style transitions: from dancers' practice to movement-based technology, 18 March 2022. Advisor: Wendy Mackay
  • PhD in progress: Eya Ben Chaaben, Exploring Human-AI Collaboration and Explainability for Sustainable ML, since November 2022. Advisors: Wendy Mackay & Janin Koch
  • PhD in progress: Tove Grimstad Bang, Somaesthetics applied to dance documentation and transmission, since September 2021. Advisor: Sarah Fdili Alaoui
  • PhD in progress: Manon Vialle, Visualization of Duncan Movement Qualities, since September 2020. Advisors: Sarah Fdili Alaoui & Melina Skouras
  • PhD in progress: Alexandre Battut, Interactive Instruments and Substrates for Temporal Media, since April 2020. Advisor: Michel Beaudouin-Lafon
  • PhD in progress: Camille Gobert, Interaction Substrates for Programming, since October 2020. Advisor: Michel Beaudouin-Lafon
  • PhD in progress: Martin Tricaud, Instruments and Substrates for Procedural Creation Tools, since October 2019. Advisor: Michel Beaudouin-Lafon
  • PhD in progress: Arthur Fages, Supporting Collaborative 3D Modeling through Augmented-Reality Spaces, since December 2019. Advisors: Theophanis Tsandilas & Cédric Fleury (IMT Atlantique)
  • PhD in progress: Capucine Nghiem, Speech-Assisted Design Sketching with an Application to e-Learning, since October 2021. Advisors: Theophanis Tsandilas & Adrien Bousseau (Inria Sophia-Antipolis)
  • PhD in progress: Anna Offenwanger, Grammars and Tools for Sketch-Driven Visualization Design, since October 2021. Advisors: Theophanis Tsandilas & Fanny Chevalier (University of Toronto)

Masters students

  • Stephanie Vo, “Typoplex - Exploring Typography in Context”: Janin Koch (advisor)
  • Samuel Le Berre, “Precipitated Convergence: An approach for Design Sprints by joining Technologies and Arts”: Wendy Mackay, Janin Koch (scientific advisors)
  • Shujian Guan, “AniMojiBoard: Gesture-based Animated Emojis”: Wendy Mackay (advisor)
  • Dylan Fluzin and Leo Cheddin, “Queering dance archive”: Sarah Fdili Alaoui (scientific advisor)
  • Lea Paymal, “Exploring loops in design”: Sarah Fdili Alaoui (scientific advisor)
  • Kevin Ratovo, “PickHis: Giving context to items in a history by connecting them”: Michel Beaudouin-Lafon and Alexandre Battut (advisors)
  • Bastien Destephen, “Instruments et interfaces pour la création procédurale dans le design et les arts numériques”: Michel Beaudouin-Lafon and Martin Tricaud (advisors)
  • Alexandre Pham and Raphaël Bournel, “Résumés d'historiques et visualisation de graphes de fichiers”: Michel Beaudouin-Lafon, Han Han and Julien Gori (advisors)
  • Xun Gong, “Customization of visualization graphics”: Theophanis Tsandilas (advisor)

10.7.3 Juries

PhD theses

  • PhD defense of Zhuoming Zhang, “Improving Mediated Touch Interaction with Multimodality”, Télécom Paris, April 2022: Theophanis Tsandilas (reviewer)
  • PhD defense of Masrour Makaremi, “Interface praticien – nouvelles technologies en orthopédie dento-faciale : apport des sciences cognitives”, Université de Bordeaux, 23 June 2022: Wendy Mackay (Reviewer)
  • PhD defense of Pierre Mahieux, “Interactions tangibles pour naviguer spatialement et temporellement en Environnements Virtuels - Application à la médiation culturelle en histoire des sciences & techniques”, ENIB, July 2022: Michel Beaudouin-Lafon (reviewer)
  • PhD defense of Alexander Eiselmayer, “Designing the Right Experiment Right: Interactive systems to support trade-off and sample size decisions in HCI Experiment Design”, University of Zurich, Switzerland, 4 October 2022: Wendy Mackay (reviewer)
  • PhD defense of Alice Martin, “Concepts and Tools for Interactive Computing”, ENAC, Toulouse, November 2022: Michel Beaudouin-Lafon (Reviewer)
  • PhD defense of Olivain Porry, “Des communautés de machines”, SACRe, Université PSL, ENSAD, 2022: Sarah Fdili Alaoui (External member)
  • PhD defense of Alexis Pister, “Visual Analytics for Historical Social Networks: Traceability, Exploration, and Analysis”, Université Paris-Saclay, 15 December 2022: Wendy Mackay (President)

Habilitations

  • Thomas Pietrzak, “On the critical role of the sensorimotor loop on the design of interaction techniques and interactive devices”, Univ. Lille, July 2022: Michel Beaudouin-Lafon (President)
  • Catherine Letondal, “Interaction technique et individuation”, ENAC, March 2022: Michel Beaudouin-Lafon (member)

10.8 Popularization

  • CreARTathon 2022, 2nd Creative Hackathon, 7-17 July 2022: Janin Koch, Wendy Mackay, Nicolas Taffin
  • “Système multi-vues pour collaborer à distance avec un utilisateur en réalité augmentée”, interview for the afternoon TV news of RTBF (Radio-télévision belge de la Communauté française), Namur, 9 April 2022: Arthur Fages
  • Demonstration for the Université Paris-Saclay Graduate School: presentation of the evaluation of a collaborative environment in Virtual Reality, 8 November 2022: Alexandre Kabil

10.8.1 Education

  • High-school textbook: Michel Beaudouin-Lafon (editor and co-author). Numérique et Sciences Informatiques (NSI), Tle spécialité (2021), by Michel Beaudouin-Lafon, Céline Chevalier, Gilles Grimaud, Benoit Groz, Philippe Marquet, Mathieu Nancel, Cristel Pelsser, Xavier Redon, Thomas Vantroys and Emmanuel Waller. Hachette Education, Paris, France. 352 pages. ISBN 978-2-01-786634-3.
  • Book based on the Inaugural Lecture at the Collège de France: “Réimaginer Nos Interactions avec le Monde Numérique” by Wendy Mackay. Librairie Arthème Fayard et Collège de France, Paris, France. 80 pages. November 2022. ISBN 978-2-213-72507-9.

10.8.2 Interventions

11 Scientific production

11.1 Major publications

  • 1. Michel Beaudouin-Lafon, Susanne Bødker and Wendy Mackay. Generative Theories of Interaction. ACM Transactions on Computer-Human Interaction 28(6), November 2021, Article 45, 54 pages.
  • 2. Alexander Eiselmayer, Chat Wacharamanotham, Michel Beaudouin-Lafon and Wendy Mackay. Touchstone2: An Interactive Environment for Exploring Trade-offs in HCI Experiment Design. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI 2019), article 217, ACM, Glasgow, United Kingdom, May 2019, 1-11.
  • 3. Jules Françoise, Sarah Fdili Alaoui and Yves Candau. CO/DA: Live-Coding Movement-Sound Interactions for Dance Improvisation. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22), article 482, ACM, New Orleans, LA, United States, April 2022, 1-13.
  • 4. Janin Koch, Nicolas Taffin, Michel Beaudouin-Lafon, Markku Laine, Andrés Lucero and Wendy Mackay. ImageSense: An Intelligent Collaborative Ideation Tool to Support Diverse Human-Computer Partnerships. Proceedings of the ACM on Human-Computer Interaction 4(CSCW1), May 2020, 1-27.
  • 5. Wanyu Liu, Rafael Lucas D'Oliveira, Michel Beaudouin-Lafon and Olivier Rioul. BIGnav: Bayesian Information Gain for Guiding Multiscale Navigation. ACM CHI 2017 - International Conference of Human-Computer Interaction, Denver, United States, May 2017, 5869-5880.
  • 6. Miguel Renom, Baptiste Caramiaux and Michel Beaudouin-Lafon. Exploring Technical Reasoning in Digital Tool Use. CHI 2022 - ACM Conference on Human Factors in Computing Systems, New Orleans, LA, United States, April 2022, 1-17.
  • 7. Téo Sanchez, Baptiste Caramiaux, Pierre Thiel and Wendy E. Mackay. Deep Learning Uncertainty in Machine Teaching. IUI 2022 - 27th Annual Conference on Intelligent User Interfaces, Helsinki / Virtual, Finland, February 2022.
  • 8. Theophanis Tsandilas and Pierre Dragicevic. Gesture Elicitation as a Computational Optimization Problem. ACM Conference on Human Factors in Computing Systems (CHI '22), New Orleans, United States, April 2022.
  • 9. Theophanis Tsandilas. Fallacies of Agreement: A Critical Review of Consensus Assessment Methods for Gesture Elicitation. ACM Transactions on Computer-Human Interaction 25(3), June 2018, 1-49.
  • 10. Theophanis Tsandilas. StructGraphics: Flexible Visualization Design through Data-Agnostic and Reusable Graphical Structures. IEEE Transactions on Visualization and Computer Graphics 27(2), October 2020, 315-325.

11.2 Publications of the year

International journals

International peer-reviewed conferences

Conferences without proceedings

  • 31. Adam Dahlgren Lindström, Wendy E. Mackay and Virginia Dignum. Thinking Fast And Slow In Human-Centered AI. Thinking Fast and Slow and Other Cognitive Theories in AI, AAAI Fall Symposium FSS-22, Arlington, Virginia, United States, November 2022, 3 pages.
  • 32. Arthur Fages, Cédric Fleury and Theophanis Tsandilas. ARgus: Multi-View System to Collaborate Remotely with an Augmented Reality User. IHM ’22 - 33ème Conférence Francophone sur l’Interaction Homme-Machine (demonstration), Namur, Belgium, April 2022.
  • 33. Miriam Sturdee, Makayla Lewis, Mafalda Gamboa, Thuong Hoang, John Miers, Ilja Šmorgun, Pranjal Jain, Angelika Strohmayer, Sarah Fdili Alaoui and Christina Wodtke. The State of the (CHI)Art. CHI '22: CHI Conference on Human Factors in Computing Systems, New Orleans, LA, United States, ACM, April 2022, 1-6.

Scientific book chapters

Doctoral dissertations and habilitation theses

  • 35. Han Han. Designing Representations for Digital Documents. PhD thesis, Université Paris-Saclay, March 2022.
  • 36. Miguel A. Renom. Theoretical bases of human tool use in digital environments. PhD thesis, Université Paris-Saclay, April 2022.
  • 37. Elizabeth Walton. Dance style transitions: from dancers' practice to movement-based technology. PhD thesis, Université Paris-Saclay, March 2022.

Other scientific publications

  • 38. Anna Rose Lucy Carter, Miriam Sturdee, Alan Dix, Dani Kalarikalayil Raju, Martha Aldridge, Eunice Sari, Wendy Mackay and Elizabeth Churchill. InContext: Futuring User-Experience Design Tools. April 2022, article 95, 1-6.

11.3 Other

Scientific popularization

11.4 Cited publications

  • 41. Michel Beaudouin-Lafon. Instrumental Interaction: An Interaction Model for Designing post-WIMP User Interfaces. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '00), The Hague, The Netherlands. ACM, New York, NY, USA, 2000, 446-453. URL: http://doi.acm.org/10.1145/332040.332473
  • 42. Michel Beaudouin-Lafon and Wendy E. Mackay. Reification, Polymorphism and Reuse: Three Principles for Designing Visual Interfaces. Proceedings of the Working Conference on Advanced Visual Interfaces (AVI '00), Palermo, Italy. ACM, New York, NY, USA, 2000, 102-109. URL: http://doi.acm.org/10.1145/345513.345267
  • 43. Camille Gobert and Michel Beaudouin-Lafon. Interactive Intermediate Representations for LaTeX Code Manipulation. Actes de la 32ème Conférence Francophone sur l’Interaction Homme-Machine (IHM ’20.21), Virtual Event, France, April 2021, 11 pages.
  • 44. Wendy E. Mackay, John Shawe-Taylor and Frank van Harmelen. Human-Centered Artificial Intelligence (Dagstuhl Seminar 22262). Dagstuhl Reports 12(6), 2023, 112-117. URL: https://drops.dagstuhl.de/opus/volltexte/2023/17457
  • 45. François Osiurak and Arnaud Badets. Tool Use and Affordance: Manipulation-Based Versus Reasoning-Based Approaches. Psychological Review 123(5), 2016, 35 pages.
  • 46 articleJ.Janet Rafner, D.Dominik Dellermann, A.Arthur Hjorth, D.Dóra Verasztó, C.Constance Kampf, W.Wendy Mackay and J.Jacob Sherson. Deskilling, Upskilling, and Reskilling: a Case for Hybrid Intelligence.Morals & Machines22021, 24-39