

Section: New Results

Human-Computer Partnerships

Participants: Wendy Mackay [correspondent], Baptiste Caramiaux, Téo Sanchez, Carla Griggio, Shu Yuan Hsueh, Wanyu Liu, Joanna McGrenere, Midas Nouwens.

ExSitu is interested in designing effective human-computer partnerships, in which expert users control their interaction with technology. Rather than treating human users as the 'input' to a computer algorithm, we explore human-centered machine learning, where the goal is to use machine learning and other techniques to increase human capabilities. Much of human-computer interaction research focuses on measuring and improving productivity; our specific goal is to create what we call 'co-adaptive systems' that are discoverable, appropriable and expressive for the user.

In creative practices, human-centered machine learning helps creative professionals explore new ideas and possibilities. We compiled recent research and development advances in human-centered machine learning and artificial intelligence (AI) within the field of creative industries, in a white paper commissioned by the NEM (New European Media) initiative [35]. We explored the use of deep reinforcement learning for sound design, working with expert sound designers [37]. We first conducted controlled studies comparing manual exploration with exploration by reinforcement. This helped us design a fully working system that we assessed in workshops with expert designers. We showed that an algorithmic sound explorer that learns from human preferences enhances the creative process by allowing holistic and embodied exploration, as opposed to the analytic exploration afforded by standard interfaces.
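As a rough, hypothetical illustration of this kind of preference-driven exploration loop (this is not the published system: the parameter space, the binary feedback, and the hill-climbing update below are simplifying assumptions), an explorer that follows a designer's judgements rather than a fixed objective might be sketched in Python as:

    """Minimal sketch of preference-driven sound-parameter exploration.

    Not the published system: the parameter space, the binary feedback,
    and the hill-climbing update are simplifying assumptions used only
    to illustrate an explorer guided by human preferences.
    """
    import random

    N_PARAMS = 4      # hypothetical synthesis parameters, each in [0, 1]
    STEP = 0.15       # size of each exploratory move
    ITERATIONS = 20

    # Hidden "taste" standing in for the designer; a real system would
    # play the candidate sound and record the expert's judgement instead.
    TARGET = [0.7, 0.3, 0.5, 0.9]

    def distance(params):
        return sum(abs(a - b) for a, b in zip(params, TARGET))

    def preference(current, candidate):
        """Simulated feedback: the user prefers whichever sounds 'closer'."""
        return distance(candidate) < distance(current)

    def explore():
        current = [random.random() for _ in range(N_PARAMS)]
        for i in range(ITERATIONS):
            # Propose a nearby point in parameter space.
            candidate = [min(1.0, max(0.0, p + random.uniform(-STEP, STEP)))
                         for p in current]
            if preference(current, candidate):  # positive feedback: follow it
                current = candidate
            print(f"iter {i:2d}: " + " ".join(f"{p:.2f}" for p in current))
        return current

    if __name__ == "__main__":
        explore()

In the actual studies, each candidate point would be rendered as sound and judged by the designer, rather than scored against a hidden target as simulated here.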

We also explored how users create their own ecosystems of communication apps as a way to support rich, personalized forms of expression [12]. We wanted to gather data about how people customize apps to enable more personal forms of expression, and how such customizations shape their everyday communication. Given the increasing use of multiple apps with overlapping communication features, we were also interested in how customizing one app influences communication via other apps. We created a taxonomy of customization options based on interviews with 15 “extreme users” of communication apps. We found that participants tailored their apps to express their identities, organizational culture, and intimate bonds with others. They also experienced expression breakdowns: frustrations around barriers to transferring personal forms of expression across apps, which inspired inventive workarounds to maintain cross-app habits of expression, such as briefly switching apps to generate and export content for a particular conversation. We conclude with implications for personalized expression in ecosystems of communication apps.

We investigated communication practices that are specific to couples [20]. Research shows that sharing streams of contextual information, e.g. location and motion, helps couples coordinate and feel more connected. We studied how couples’ communication changes when they share multiple, persistent information streams. We designed Lifelines, a mobile-app technology probe that visualizes up to six streams on a shared timeline: closeness to home, battery level, steps, media playing, texts and calls. A month-long study with nine couples showed that partners interpreted information mostly from individual streams, but also combined them for more nuanced interpretations. Persistent streams allowed missing data to become meaningful and provided new ways of understanding each other. Unexpected patterns in any stream could trigger calls and texts, whereas seeing expected data could replace direct communication, which may either improve or disrupt established communication practices.
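To make the idea of persistent, combinable streams concrete, the sketch below shows a small hypothetical data model that aligns samples from several streams on one shared timeline. The six stream names follow the probe's description; the field names, hourly bucketing, and Python representation are our own illustrative assumptions, not the Lifelines implementation.

    """Hypothetical data model: multiple persistent streams on one timeline."""
    from dataclasses import dataclass
    from datetime import datetime
    from collections import defaultdict

    STREAMS = ("closeness_to_home", "battery", "steps",
               "media_playing", "texts", "calls")

    @dataclass
    class Sample:
        stream: str        # one of STREAMS
        partner: str       # which partner the sample belongs to
        time: datetime
        value: object      # stream-specific payload (distance, %, count, title...)

    def bucket_by_hour(samples):
        """Group samples from all streams into hourly buckets on a shared
        timeline, so that co-occurring values (e.g. low battery plus no texts)
        can be read together."""
        timeline = defaultdict(list)
        for s in samples:
            timeline[s.time.replace(minute=0, second=0, microsecond=0)].append(s)
        return dict(sorted(timeline.items()))

    # Example: one partner's afternoon as three aligned stream samples.
    samples = [
        Sample("battery", "A", datetime(2019, 5, 3, 14, 20), 12),
        Sample("texts",   "A", datetime(2019, 5, 3, 14, 45), 0),
        Sample("steps",   "A", datetime(2019, 5, 3, 15, 10), 3200),
    ]
    for hour, bucket in bucket_by_hour(samples).items():
        print(hour, [(s.stream, s.value) for s in bucket])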

Finally, we extended our earlier work on the Expressive Keyboard by adding animated emojis as a form of expressive output for messaging apps. An initial user study identified both the cumbersome nature of inserting emojis and the creative ways that users construct emoji sequences to convey rich, nuanced non-verbal expressions, including emphasis, changes of expression, and micro-stories. We then developed MojiBoard [17], an emoji entry technique that lets users generate dynamic parametric emojis from a gesture keyboard. Here, the form of the user's gesture is transformed into an animation, allowing users to “draw” dynamic expressions through their own movements. MojiBoard lets users switch seamlessly between typing and parameterizing emojis, and provides an example of how we can transform a user's gesture into an expressive output, which is reified into an emoji that can be interacted with again.
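As an illustrative sketch of this gesture-to-animation mapping idea (not MojiBoard's actual implementation: the gesture features and animation parameters below are assumptions), one could derive an emoji's animation parameters from simple properties of the drawn stroke:

    """Illustrative sketch: deriving emoji animation parameters from a gesture.

    Not MojiBoard's implementation; the chosen gesture features (path length,
    duration, speed) and the animation parameters they drive are assumptions.
    """
    import math

    def gesture_features(points):
        """points: list of (x, y, t) samples from the gesture keyboard."""
        length = sum(math.dist(points[i][:2], points[i + 1][:2])
                     for i in range(len(points) - 1))
        duration = points[-1][2] - points[0][2]
        speed = length / duration if duration > 0 else 0.0
        return {"length": length, "duration": duration, "speed": speed}

    def animation_params(features):
        """Map gesture features to hypothetical animation parameters."""
        return {
            "bounce_amplitude": min(1.0, features["speed"] / 500.0),  # faster gesture, livelier emoji
            "loop_duration_s": max(0.3, min(2.0, features["duration"])),
            "scale": 1.0 + min(0.5, features["length"] / 1000.0),
        }

    # Example: a quick, short swipe over the emoji key.
    stroke = [(0, 0, 0.00), (40, 10, 0.05), (90, 15, 0.10), (120, 5, 0.15)]
    print(animation_params(gesture_features(stroke)))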

Wendy Mackay describes how the theoretical foundation of the CREATIV ERC Advanced Grant, based on the principle of co-adaptation, influenced her research with musicians, choreographers, graphic designers and other creative professionals. The interview is published in the book “New Directions in Music and Human-Computer Interaction” (Springer Nature), as a chapter entitled “HCI, Music and Art: An Interview with Wendy Mackay” [34]. Along the same lines, she contributed to the chapter “A Design Workbench for Interactive Music Systems” [33], which discusses possible links between the fields of computer music and human-computer interaction (HCI), particularly in the context of the MIDWAY project between Inria (France) and McGill University (Canada). The goal of MIDWAY was to construct a “musical interaction design workbench” that facilitates the exploration and development of new interactive technologies for musical creation and performance by bringing together useful models, tools, and recent developments from computer music and HCI. These models and tools have helped expand the possibilities for enhancing musical expression, and have provided HCI researchers with a better foundation for the design of tools for “extreme” users.