
2023 Activity Report - Project-Team VIRTUS

RNSR: 202224309G
  • Research center Inria Centre at Rennes University
  • In partnership with: Université Haute Bretagne (Rennes 2), Université de Rennes
  • Team name: The VIrtual Us
  • In collaboration with: Institut de recherche en informatique et systèmes aléatoires (IRISA), Mouvement, Sport, Santé (M2S)
  • Domain: Perception, Cognition and Interaction
  • Theme: Interaction and visualization

Keywords

Computer Science and Digital Science

  • A5.5.4. Animation
  • A5.6.1. Virtual reality
  • A5.6.3. Avatar simulation and embodiment
  • A5.11.1. Human activity analysis and recognition
  • A9.3. Signal analysis

Other Research Topics and Application Domains

  • B1.2. Neuroscience and cognitive science
  • B2.8. Sports, performance, motor skills
  • B7.1.1. Pedestrian traffic and crowds
  • B9.3. Medias
  • B9.5.6. Data science

1 Team members, visitors, external collaborators

Research Scientists

  • Julien Pettré [Team leader, INRIA, Senior Researcher, HDR]
  • Samuel Boivin [INRIA, Researcher]
  • Ludovic Hoyet [INRIA, Researcher, HDR]
  • Katja Zibrek [INRIA, ISFP, from Oct 2023]

Faculty Members

  • Kadi Bouatouch [UNIV RENNES, emeritus, HDR]
  • Marc Christie [UNIV RENNES, Associate Professor]
  • Anne Hélène Olivier [UNIV RENNES II, Associate Professor, HDR]

PhD Students

  • Kelian Baert [TECHNICOLOR, CIFRE, from Sep 2023]
  • Jean-Baptiste Bordier [UNIV RENNES, from Oct 2023]
  • Thomas Chatagnon [INRIA]
  • Philippe De Clermont Gallerande [INTERDIGITAL, CIFRE, from Feb 2023]
  • Céline Finet [INRIA, from Oct 2023]
  • Alexis Jensen [UNIV RENNES]
  • Alberto Jovane [INRIA, until Feb 2023]
  • Jordan Martin [LCPP, CIFRE, from Feb 2023]
  • Lucas Mourot [TECHNICOLOR, CIFRE, until May 2023]
  • Yuliya Patotskaya [INRIA]
  • Xiaoyuan Wang [ENS RENNES]
  • Tony Wolff [UNIV RENNES, from Nov 2023]
  • Tairan Yin [INRIA]

Technical Staff

  • Rémi Cambuzat [INRIA, Engineer]
  • Bhaswar Gupta [INRIA, Engineer, from Nov 2023]
  • Anthony Mirabile [INRIA, Engineer, until Sep 2023]

Interns and Apprentices

  • Arthur Audrain [INRIA, Intern, from Dec 2023]
  • Ali Ghammaz [INRIA, Intern, from May 2023 until Aug 2023]
  • Taylor Holloway [Wilfrid Laurier University, Canada, from May 2023 until Jun 2023]
  • Dorian Le Cloirec–Le Gall [INRIA, Intern, from Jul 2023 until Aug 2023]
  • Emily Vandenberg [Wilfrid Laurier University, Canada, from May 2023 until Jun 2023]

Administrative Assistant

  • Gwenaelle Lannec [INRIA]

Visiting Scientists

  • Juliane Adrian [Juelich Research Center, Germany, until Feb 2023]
  • Arnau Colom Pasquale de Riquelme [Pompeu Fabra University, Barcelona, Spain, from Oct 2023]
  • Sina Feldmann [Juelich Research Center, Germany, until Feb 2023]
  • Marilena Lemonari [University of Cyprus, Cyprus, from Sep 2023]
  • Melania Prieto Martin [Univ. Rey Juan Carlos, Madrid, Spain, from Aug 2023]
  • Thomas Uboldi [Univ. Brest, until Feb 2023]

External Collaborator

  • Christian Bouville [Volunteer Scientific support]

2 Overall objectives

Numerical simulation is a tool at the disposal of the scientist for the understanding and the prediction of real phenomena. The simulation of complex physical behaviours is, for example, a perfectly integrated solution in the technological design of aircraft, cars or engineered structures in order to study their aerodynamics or mechanical resistance. The economic and technological impact of simulators is undeniable in terms of preventing design flaws and saving time in the development of increasingly complex systems.

Let us now imagine the impact of a simulator that would incorporate our digital alter-egos as simulated doubles that would be capable of reproducing real humans, in their behaviours, choices, attitudes, reactions, movements, or appearances. The simulation would not be limited to the identification of physical parameters such as the shape of an object to improve its mechanical properties, but would also extend to the identification of functional or interaction parameters and would significantly increase its scope of application. Also imagine that this simulation is immersive, that is to say that beyond the simulation of humans, we would enable real users to share the experience of a simulated situation with their digital alter-egos. We would then open up a wide field of possibilities towards studies accounting for psychological and sociological parameters - which are key to decipher human behaviors - and furthermore experiential, narrative or emotional dimensions. A revolution, but also a key challenge as putting the human being into equations, following all its aspects and dimensions, is extremely difficult. This is the challenge we propose to tackle by exploring the design and applications of immersive simulators for scenes populated by virtual humans.

This challenge is transdisciplinary by nature. The human being can be considered through the lenses of Physics, Psychology, Sociology and Neurosciences, which have all triggered many research topics at the interface with Computer Science: biomechanical simulation, animation and graphics rendering of characters, artificial intelligence, computational neurosciences, etc. In such a context, our transversal activity aims to create a new generation of virtual, realistic, autonomous, expressive, reactive and interactive humans to populate virtual scenes. Our Inria project-team The Virtual Us (our virtual alter-egos, code name VirtUs) has the ambition of designing immersive simulators of virtual populations (shortened as “VirtUs simulators”) where both virtual and real humans coexist, with a sufficient level of realism so that the experience lived virtually and its results can be transposed to reality. Achieving this goal allows us to use VirtUs simulators to better digitally replicate humans, better interact with them, and thus create new kinds of narrative experiences as well as new ways to observe and understand human behaviours.

3 Research program

3.1 Scientific Framework and Challenges

Immersive simulation of populated environments is currently undergoing a revolution on several fronts, whose origins go well beyond the limits of this single theme. Immersive technologies are experiencing an industrial revolution, and are now available at low cost to a very wide range of users. Software solutions for simulation have also undergone an industrial revolution: generic solutions from the world of video games are now commonly available (e.g., Unity, Unreal) to design interactive and immersive virtual worlds in a simplified way. Beyond the technological aspects, simulation techniques, and in particular human-related processes, are undergoing the revolution of machine learning, with a radical shift in their paradigm: from a procedural approach that tends to reproduce the mechanisms by which humans make decisions and carry out actions, we are moving towards solutions that tend to directly reproduce the results of these mechanisms from large statistics of past behaviors. On a broad horizon, these revolutions radically change the interactions between digital systems and the real world, including us, suddenly bringing them much closer to certain human capacities to interpret or predict the world, to act or react to it. These technological and scientific revolutions necessarily reposition the application framework of simulators, also opening up new uses, such as the use of immersive simulation as a learning ground for intelligent systems or for data collection on human-computer interaction.

Figure 1: From left to right, illustrations of VirtUs simulators usage scenarios S1, S2 and S3

The VirtUs team's proposal is fully in line with this revolution. Typical usages of VirtUs simulators in a technological or scientific setting are, for instance, illustrated in Figure 1 through the following example scenarios:

  • S1 -
    We want to model collective behaviours from a microscopic perspective, i.e., to understand the mechanisms of local human interactions and how they result in emergent crowd behaviours. The VirtUs simulator is used to: immerse real users in specific social settings, expose them to controlled behaviours of their neighbours, gather behavioural data in controlled conditions, and, as an application, help with modeling decisions.
  • S2 -
    We want to evaluate and improve a robot's capabilities to navigate in a public space in proximity to humans. The VirtUs simulator is used to immerse a robot in a virtual public space to: generate an infinite number of test scenarios for it, generate automatically annotated sensor data for the learning of tasks related to its navigation, determine safety-critical density thresholds, observe the reactions of subjects also immersed, etc.
  • S3 -
    We want to evaluate the design of a transportation facility in terms of comfort and safety. The VirtUs simulator is used to immerse real users in virtual facilities to: study the positioning of information signs, measure the related gaze activity, evaluate users' personal experiences in changing conditions, evaluate safety procedures in specific crisis scenarios, evaluate reactions to specific events, etc.

These three scenarios allow us to detail the ambitions of VirtUs, and in particular to define the major challenges that the team wishes to take up:

  • C1 -
    Better capture the characteristics of human motion in complex and varied situations
  • C2 -
    Provide increased realism of individual and collective behaviors (from models gathered in C1)
  • C3 -
    Improve the immersion of users, to not only create new user experiences, but to also better capture and understand their behaviors

But it is also stressed through these scenarios and challenges that they cannot be addressed in a concrete way without taking into account the uses made of VirtUs simulators. Scenario S2, for the synthesis of robot sensor data, requires that simulated scenes reflect the same characteristics as real scenes (for instance, training a robot to predict human movements requires that virtual characters indeed cover all the relevant postures and movements). Scenarios S1 and S3 focus on verifying that users have the same information that guides their behavior as in a real situation (for instance, that the salience of the virtual scene is consistent with reality and causes users to behave in the same way as in a real situation), etc. Thus, while the nature of the scientific questions that animate the team remains, they are addressed across the spectrum of applications of VirtUs simulators. VirtUs members explore some scientific applications that directly contribute to the VirtUs research program or to connected fields, such as: the study of crowd behaviours, the study of pedestrian behaviour, and virtual cinematography.

3.2 Research Axes

Figure 2: VirtUs research scheme and axes

Figure 2 shows how we articulate the challenges taken up by the team, and how we identify 3 research axes to implement this scheme, addressing problems at three different scales: the individual virtual human, the group, and the whole simulator:

3.2.1 Axis A1 - NextGen Virtual Characters

Summary:

At the individual level, we aim at developing the next generation of virtual humans whose capabilities enable them to autonomously act in immersive environments and interact with immersed users in a natural, believable and autonomous way.

Vision:

Technology exists today to generate virtual characters with a unique visual appearance (body shape, skin color, hairstyle, clothing, etc.). But it still requires a large programming or designing effort to individualize their motion style, to give them traits based on the way they move, or to make them behave in a consistent way across a large set of situations. Our vision is that individualization of motion should be as easy as appearance tuning. This way, virtual characters could for example convey their personality or their emotional state by the way they move, or adapt to new scenarios. Unlike other approaches which rely on ever-larger datasets to extend characters' motion capabilities, we prefer exploring procedural techniques to adjust existing motions.

Long term scientific objective:

Axis A1 addresses the challenge C1 to better capture the characteristics of human motion in varied situations, as well as C2 to increase the realism of individual behaviour. Our expectation is that the user will have access to a realistic, ecological immersive experience of the virtual world by setting the spotlight on the VirtUs characters that populate it. The objective is to bring characters to life, to be our digital alter-egos by the way they move and behave in their environment and by the way they interact with us. Our Grail is the perceived realism of characters' motion, which means that users can understand the meaning of characters' actions, interpret their expressions or predict their behaviors, just like they do with other real humans. In the long term, this objective raises the questions of characters' non-verbal communication capabilities, including multimodal interactions between characters and users - such as touch (haptic rendering of character contacts) or audio (character reaction to user sounds) - as well as adaptation of a character's behaviors to its morphology, physiology or psychology, allowing for example to vary the weight, age, height or emotional state of a character with all the adaptations that follow. Finally, our goal is to avoid the need for constant expansion of motion databases capturing these variations, which requires expensive hardware and effort, and instead bring these variations procedurally to existing motions.

3.2.2 Axis A2 - Populated Dynamic Worlds

Summary:

Populated Dynamic Worlds: at the group or crowd level, we aim at developing techniques to design populated environments (i.e., how to define and control the activity of large numbers of virtual humans) as well as to simulate them (i.e., enable them to autonomously interact together, perform group actions and collective behaviours)

Vision:

Designing immersive populated environments requires bringing to life many characters, which basically means that thousands of animation trajectories must be generated, possibly in real-time to ensure interaction capabilities. Our vision is that, while automating the animation process is obviously required here to manage complexity, there is also an absolute need for intelligent authoring tools that let designers control, script or devise the higher-level aspects of populated scenes: exhibited population behaviours, density, reactions to events, etc. Ten years from now, we aim for tools that combine the authoring and simulation aspects of our problem, and let designers work in the most intuitive manner. Typically working from a “palette” of examples of typical populations and behaviours, we are aiming for virtual population design processes which are as simple as “I would like to populate my virtual railway station in the style of the New York central station at rush hour (full of commuters), or in the style of Paris Montparnasse station on a week-end (with more families, and different kinds of shopping activities)”.

Long term scientific objective:

Axis A2 tackles the challenge of building and conveying lively, complex, realistic virtual environments populated by many characters. It addresses the challenges C1 and C2 (extending them to complex and collective situations in comparison with the previous axis). Our objective is twofold. First, we want to simulate the motion of crowds of characters. Second, we want to deliver tools for authoring crowded scenes. At the interface of these two objectives, we also explore the question of animating crowds, i.e., generating full-body animations for crowd characters (crowd simulation works with abstraction levels where bodies are not represented). We target a high level of visual realism. To this end, our objective is to be capable of simulating and authoring virtual crowds so that they resemble real ones: e.g., working from examples of crowd behaviours and tuning simulation and animation processes accordingly. We also want to progressively erase the distinction that exists today between crowd simulation and crowd animation. This is critical in the treatment of complex, body-to-body interactions. By setting this objective, we keep the application field for our crowd simulation techniques wide open, as we satisfy both the need for visual realism in entertainment applications and the need for data assimilation capabilities in real crowd management applications.

3.2.3 Axis A3 - New Immersive Experiences

Summary:

New immersive experiences: in an application framework, we aim at devising relevant evaluation metrics and demonstrating the performance of VirtUs simulations, as well as the compliance and transferability of new immersive experiences to reality.

Vision:

We have highlighted through some scenarios the high potential of VirtUs simulators to provide new immersive experiences, and to reach new horizons in terms of scientific or technological applications. Our vision is that new immersive capabilities, and especially new kinds of immersive interactions with realistic groups of virtual characters, can lead to radical changes in various domains, such as the experimental processes used to study human behaviour in fields like socio-psychology, or entertainment, to tell stories across new media. Ten years from now, immersive simulators like the VirtUs ones will have demonstrated their capacity to reach new levels of realism and open such possibilities, offering experiences where one can perceive the context in which the immersive experience takes place, can understand and interpret characters' actions happening around them, and can suspend disbelief in the ongoing story conveyed by the simulated world.

Long term scientific objective:

Axis A3 addresses the challenge of designing novel immersive experiences, which builds on the innovations from the first two research axes to design VirtUs simulators placing real users in close interaction with virtual characters. We design our immersive experiences with two scientific objectives in mind. Our first objective is to observe users in a new generation of immersive experiences where they move, behave and interact as in normal life, so that observations made in VirtUs simulators enable us to study increasingly complex and ecological situations. This could be a step change in the technologies used to study human behaviours, which we apply to our own research objects developed in Axes 1 & 2. Our second objective is to benefit from this face-to-face interaction with VirtUs characters to evaluate their capability to present more subtle behaviors (e.g., reactivity, expressiveness), and to improve immersion protocols. But evaluation methodologies are yet to be invented. Long-term perspectives also encompass better understanding how immersive contents are perceived, not only at the low level of image saliency (to compose scenes and contents that are more visually perceptible), but also at higher levels related to cognition and emotion (to compose scenes and contents with a meaning and an emotional impact). By embracing this broader vision, we expect in the future the interactive design of more compelling and enjoyable user experiences.

4 Application domains

In this section we detail how each research axis of the VirtUs team contributes to different application areas. We have identified the directly related disciplines, which we detail in the subsections below.

4.1 Computer Graphics, Computer Animation and Virtual Reality

Our research program aims at increasing the action and reaction capabilities of the virtual characters that populate our immersive simulations. Therefore, we contribute to techniques that enable us to animate virtual characters, to develop their autonomous behaviour capabilities, to control their visual representations, but also, more related to immersive applications, to develop their interaction capabilities with a real user, and finally to adapt all these techniques to the limited computational time budget imposed by a real time, immersive and interactive application. These contributions are at the interface of computer graphics, computer animation and virtual reality.

Our research also aims at proposing tools to stage a set of characters in relation to a specific environment. Our contributions aim at making scene creation tasks intuitive while ensuring excellent coherence between the visible behaviour of the characters and the expected actions in the environment in which they evolve. These contributions have applications in the field of computer animation.

4.2 Cinematography and Digital Storytelling

Our research targets the understanding of complex movie sequences, and more specifically how low- and high-level features are spatially and temporally orchestrated to create a coherent and aesthetic narrative. This understanding then feeds computational models (through probabilistic encodings but also by relying on learning techniques), which are designed with the capacity to regenerate new animated contents, through automated or interactive approaches, e.g., exploring stylistic variations. This finds applications in 1) the analysis of film/TV/broadcast contents, augmenting the nature and range of knowledge potentially extracted from such contents, with interest from major content providers (Amazon, Netflix, Disney) and 2) the generation of animated contents, with interest from animation companies, film previsualisation, or game engines.

Furthermore, our research focuses on the extraction of lighting features from real or captured scenes, and the simulation of these lightings in new virtual or augmented reality contexts. The underlying scientific challenges are complex and related to understanding from images where lights are, how they interact with the scene and how light propagates. Understanding this light staging and reproducing it in virtual environments finds many applications in the film and media industries, which want to seamlessly integrate virtual and real contents together through spatially and temporally coherent light setups.

4.3 Human motion, Crowd dynamics and Pedestrian behaviours

Our research program contributes to crowd modelling and simulation, including pedestrian simulation. Crowd simulation has various applications ranging from entertainment (visual effects for cinema, animated and interactive scenes for video games) to architecture and security (dimensioning of places intended to receive the public, flow management). Our research program also aims to immerse users in these simulations in order to study their behaviour, opening the way to applications for research on human behaviour, including human movement sciences, or the modeling of pedestrians and crowds.

4.4 Psychology and Perception

One important dimension of our research consists in identifying and measuring what users of our immersive simulators are able to perceive of the virtual situations. We are particularly interested in understanding how the content presented to them is interpreted, how they react to different situations, what are the elements that impact their reactions as well as their immersion in the virtual world, and how all these elements differ from real situations. Such challenges are directly related to the fields of Psychology and Perception.

5 Highlights of the year

  • Katja Zibrek has joined the VirtUs team as a permanent member. She joins Inria as an ISFP.
  • VirtUs organized the 16th ACM Conference on Motion, Interaction and Games (ACM MIG 2023) in Rennes, November 15th-17th. The conference hosted around 80 participants.

5.1 Awards

  • Xi Wang, former VirtUs PhD student, is the awardee of the 2nd PhD thesis prize of the Fondation Rennes 1 in Mathematics and Computer Science for his thesis entitled "Robustness of visual SLAM techniques to light changing conditions".
  • Xi Wang, former VirtUs PhD student, and Marc Christie received the conference Best Paper Award for “Real-time Computational Cinematographic Editing for Broadcasting of Volumetric-captured events: an Application to Ultimate Fighting” at ACM MIG 2023.
  • Alexis Jensen received the Best Presentation Award at the ACM MIG 2023 conference for his paper “Physical Simulation of Balance Recovery after a Push”.

6 New results

6.1 NextGen Virtual Characters

6.1.1 Real-time Multi-map Saliency-driven Gaze Behavior for Non-conversational Characters

Participants: Ludovic Hoyet [contact], Julien Pettré, Marc Christie, Kadi Bouatouch, Anne-Hélène Olivier.

Figure 3: The flowchart of our method describing the five-step process (numbered blocks): 1) Rendering the image in the character's field of view. 2) The image of the scene is passed through a visual saliency model, which outputs an eye-fixation probability for each pixel. 3) The predicted saliency map is combined with several human oculomotor biases, and merged into a fixation distribution. 4) The position and the duration of the fixation are randomly sampled, using the computed spatial fixation distribution and predetermined fixation duration distribution. 5) Given the character's current eye and head orientations, its gaze is animated toward the new fixation point. When the duration of the fixation is reached, the process is reiterated from step 1.

Gaze behavior of virtual characters in video games and virtual reality experiences is a key factor of realism and immersion. Indeed, gaze plays many roles when interacting with the environment; not only does it indicate what characters are looking at, but it also plays an important role in verbal and non-verbal behaviors and in making virtual characters alive. Automated computing of gaze behaviors is however a challenging problem, and to date none of the existing methods are capable of producing close-to-real results in an interactive context. We therefore propose a novel method that leverages recent advances in several distinct areas related to visual saliency, attention mechanisms, saccadic behavior modelling, and head-gaze animation techniques. Our approach articulates these advances to converge on a multi-map saliency-driven model which offers real-time realistic gaze behaviors for non-conversational characters, together with additional user-control over customizable features to compose a wide variety of results. We first evaluate the benefits of our approach through an objective evaluation that confronts our gaze simulation with ground truth data using an eye-tracking dataset specifically acquired for this purpose. We then rely on subjective evaluation to measure the level of realism of gaze animations generated by our method, in comparison with gaze animations captured from real actors. Our results show that our method generates gaze behaviors that cannot be distinguished from captured gaze animations. Overall, we believe that these results will open the way for more natural and intuitive design of realistic and coherent gaze animations for real-time applications. Our approach is illustrated in Figure 3 and our results are reported in 8.
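
For readers interested in the control flow, the five-step loop above can be summarized with the minimal Python sketch below. All helper callables (render_view, saliency_model, oculomotor_biases, animate_gaze) are hypothetical placeholders standing in for the actual rendering, saliency and animation components, and the fixation duration model is an assumption.

```python
import numpy as np

def gaze_step(character, scene, render_view, saliency_model, oculomotor_biases,
              animate_gaze, rng=None):
    """One iteration of the five-step gaze loop (illustrative sketch only)."""
    rng = rng or np.random.default_rng()
    # 1) Render the scene from the character's field of view.
    image = render_view(scene, character)
    # 2) Predict a per-pixel eye-fixation probability (saliency map).
    saliency = saliency_model(image)
    # 3) Combine saliency with oculomotor bias maps into a fixation distribution.
    fixation_dist = saliency * oculomotor_biases(character, image.shape[:2])
    fixation_dist /= fixation_dist.sum()
    # 4) Sample the fixation position and its duration.
    flat_idx = rng.choice(fixation_dist.size, p=fixation_dist.ravel())
    target_pixel = np.unravel_index(flat_idx, fixation_dist.shape)
    duration = rng.lognormal(mean=-1.2, sigma=0.4)  # assumed duration distribution (s)
    # 5) Animate eyes and head toward the new fixation point; the caller
    #    re-invokes this function once the duration has elapsed.
    animate_gaze(character, target_pixel, duration)
    return target_pixel, duration
```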

6.1.2 Warping character animations using visual motion features

Participants: Julien Pettré [contact], Alberto Jovane, Ludovic Hoyet, Anne-Hélène Olivier, Katja Zibrek, Marc Christie.

Figure 4: Overview of our approach. From an input sequence of a character animation, we first estimate different visual motion features on the current pose, considering the environment, the observer's viewpoint, and a visual target (blue). Then, multiple plausible motion modifications are computed manipulating warping units (yellow), and the ones that minimise the visual error between the current state and the target are applied to the output motion. This process is repeated for the whole motion over a control loop (red).

Despite the wide availability of large motion databases, and recent advances in exploiting them to create tailored contents, artists still spend tremendous efforts in editing and adapting existing character animations to new contexts. This motion editing problem is a well-addressed research topic. Yet only a few approaches have considered the influence of the camera angle, and the resulting visual features it yields, as a means to control and warp a character animation. This work proposes the design of viewpoint-dependent motion warping units that perform subtle updates on animations through the specification of visual motion features such as visibility, or spatial extent. We express the problem as a specific case of visual servoing, where the warping of a given character motion is regulated by a number of visual features to enforce. Results demonstrate the relevance of the approach for different motion editing tasks and its potential to empower virtual characters with attention-aware communication abilities. Our approach is illustrated in Figure 4 and our results are reported in 9.
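
As an illustration of the servoing idea, the sketch below performs a per-frame update of hypothetical warping-unit parameters by a finite-difference descent on the visual feature error. The feature extractor is supplied by the caller and the step size is arbitrary, so this is only a conceptual stand-in for the method described in the paper.

```python
import numpy as np

def servo_warp(pose, warp_params, target_features, feature_fn, lr=0.1, iters=5, eps=1e-3):
    """Adjust warping-unit parameters so that the visual features of the warped
    pose (e.g., visibility, spatial extent) move toward the target features."""
    warp_params = np.asarray(warp_params, dtype=float).copy()
    for _ in range(iters):
        base_err = np.linalg.norm(feature_fn(pose, warp_params) - target_features)
        grad = np.zeros_like(warp_params)
        for i in range(warp_params.size):            # finite-difference gradient estimate
            probe = warp_params.copy()
            probe[i] += eps
            err = np.linalg.norm(feature_fn(pose, probe) - target_features)
            grad[i] = (err - base_err) / eps
        warp_params -= lr * grad                      # descend on the visual error
    return warp_params
```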

6.1.3 Physical Simulation of Balance Recovery after a Push

Participants: Julien Pettré [contact], Alexis Jensen, Thomas Chatagnon.

Figure 5: Experiment and simulation of balance recovery after a push with a pole

Our goal is to simulate how humans recover balance after external perturbation, e.g., being pushed. While different strategies can be adopted to achieve balance recovery, we particularly aim at replicating how humans combine the control of their support area with the control of their body movement to regain balance when it is necessary. We develop a physics-based approach to simulate balance recovery, with two main contributions to achieve our goal: a foot control technique to adjust the shape of a character's support zone to the motion of its center of mass (CoM), and the dynamic control of the CoM to maintain its vertical projection in this same zone. We also calibrate the simulation by optimisation, before validating our results against experimental data. Calibration data as well as simulation results are illustrated in Figure 5. Our results are reported in 12. This work was performed in collaboration with Charles Pontonnier from the MimeTIC team.
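
To fix ideas, the fragment below sketches the coupling between the two controls in a highly simplified form: the extrapolation gain, the step-length margin and the foot-placement rule are illustrative assumptions, not the calibrated controller used in the paper.

```python
import numpy as np
from matplotlib.path import Path

def balance_decision(com_pos, com_vel, support_polygon, extrapolation_gain=0.3):
    """Decide between in-place CoM control and a recovery step (toy sketch).

    com_pos, com_vel: 2D position/velocity of the centre of mass (ground plane).
    support_polygon:  (N, 2) vertices of the current support area.
    """
    xcom = com_pos + extrapolation_gain * com_vel          # extrapolated CoM projection
    if Path(support_polygon).contains_point(xcom):
        # Projection stays inside the support area: regulate the CoM in place.
        return "com_control", None
    # Otherwise adapt the support area: step along the CoM velocity so the
    # new polygon recaptures the extrapolated projection.
    direction = com_vel / (np.linalg.norm(com_vel) + 1e-9)
    step_target = xcom + 0.2 * direction                    # assumed step-length margin
    return "recovery_step", step_target
```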

6.1.4 Exploring the Perception of Center of Mass changes for VR Avatars

Participants: Ludovic Hoyet [contact].

Figure 6: 3D models of the 8 actors used in the experiments conducted in 15, depicting walking and jogging motions. The models represent varying body mass indices (BMI), with half in the high BMI group and the other half in the low BMI group. Left: High BMI M/F and Low BMI M/F walking; Right: High BMI M/F and Low BMI M/F jogging.

Populating Virtual Environments with animated virtual characters often involves retargeting motions to 3D body models with differing shapes. A user’s avatar, for example, should move in a way that is consistent with their model’s body shape in order to maintain the sense of presence. In this work 15, performed in collaboration with Bharat Vyas and Carol O'Sullivan (Trinity College Dublin) in the context of the ITN CLIPE European project, we conducted a set of perception experiments to explore how motions captured from actors with various body mass indices (BMI) are perceived when they are retargeted to characters with different BMIs. We also explored the perceptual effects of retargeting average and physics-based motions. To explore the latter, we devised a physics-based controller framework that utilizes motion, target body weight, and height as inputs to generate retargeted motions. Despite the controller generating varied motions for various body shapes, average motions consistently outperformed the controller-generated motions in terms of naturalness. Overall, this work highlights an anthropometric-based physics controller and a novel approach for perceptual evaluation of human motion retargeting for virtual characters.

6.2 Populated Dynamic Worlds

6.2.1 GREIL-Crowds: Crowd Simulation with Deep Reinforcement Learning and Examples

Participants: Julien Pettré [contact].

Figure 7: Overview of GREIL-Crowds. (top) At the pre-processing stage, spatio-temporal trajectories of real world crowds are extracted from videos. These are then processed to find agent centric states and actions, and to define a data-driven reward function R(s,a,s'). (bottom) During training, R(s,a,s') is used by the Double Deep Q-Learning algorithm to find the optimal policy function for the agents. A replay buffer memory combined with a data-driven training strategy are crucial elements in achieving efficient policies.

Simulating crowds with realistic behaviors is a difficult but very important task for a variety of applications. Quantifying how a person balances between different conflicting criteria such as goal seeking, collision avoidance and moving within a group is not intuitive, especially if we consider that behaviors differ largely between people. Inspired by recent advances in Deep Reinforcement Learning, in 6, we propose Guided REinforcement Learning (GREIL) Crowds, a method that learns a model for pedestrian behaviors which is guided by reference crowd data, as illustrated in Figure 7. The model successfully captures behaviors such as goal seeking, being part of consistent groups without the need to define explicit relationships, and wandering around seemingly without a specific purpose. Two fundamental concepts are important in achieving these results: (a) the per-agent state representation and (b) the reward function. The agent state is a temporal representation of the situation around each agent. The reward function is based on the idea that people try to move into situations/states in which they feel comfortable. Therefore, in order for agents to stay in a comfortable state space, we first obtain a distribution of states extracted from real crowd data; then we evaluate states based on how much of an outlier they are compared to such a distribution. We demonstrate that our system can capture and simulate many complex and subtle crowd interactions in varied scenarios. Additionally, the proposed method generalizes to unseen situations, generates consistent behaviors and does not suffer from the limitations of other data-driven and reinforcement learning approaches.
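
The data-driven reward idea can be illustrated with the toy sketch below, where the "comfort" of a simulated state is scored by its likelihood under a density model fitted to states extracted from real crowd data. The Gaussian-mixture model and the squashing function are assumptions made for illustration, not the formulation used in GREIL-Crowds.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_state_distribution(real_states, n_components=16, seed=0):
    """real_states: (N, d) array of agent-centric states extracted from videos."""
    return GaussianMixture(n_components=n_components, random_state=seed).fit(real_states)

def data_driven_reward(state_model, next_state):
    """Reward a transition by how typical (non-outlying) the next state is."""
    log_lik = state_model.score_samples(np.atleast_2d(next_state))[0]
    return float(np.tanh(log_lik))   # bounded reward; squashing is an assumption
```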

6.2.2 Reward Function Design for Crowd Simulation via Reinforcement Learning

Participants: Julien Pettré [contact].

Figure 8: (Left) Success rates of agents trained with certain reward functions in the Circle scenario. (Right) Energy+ metric as a function of training progress with various reward functions. To maintain the performance from the first stage of the training, it is necessary to either use a potential term, or set the discount factor to gamma = 1. Agents without a potential or a final heuristic converge to standing still, while other variants' performance significantly degrades.

Crowd simulation is important for video-game design, since it makes it possible to populate virtual worlds with autonomous avatars that navigate in a human-like manner. Reinforcement learning has shown great potential in simulating virtual crowds, but the design of the reward function is critical to achieving effective and efficient results. In this work, we explore the design of reward functions for reinforcement learning-based crowd simulation. We provide theoretical insights on the validity of certain reward functions according to their analytical properties, and evaluate them empirically using a range of scenarios, using energy efficiency as the metric. Our experiments show that directly minimizing the energy usage is a viable strategy as long as it is paired with an appropriately scaled guiding potential, and they enable us to study the impact of the different reward components on the behavior of the simulated crowd. Our findings can inform the development of new crowd simulation techniques, and contribute to the wider study of human-like navigation. This work was performed in collaboration with the LIX laboratory of École Polytechnique. Results are illustrated in Figure 8. Results are also reported in 13.
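
As a concrete illustration of the kind of reward studied here, the sketch below combines a per-step energy cost with a potential-based guiding term. The energy coefficients and the potential scale are placeholder values, not those evaluated in the paper.

```python
import numpy as np

E_S, E_W = 2.23, 1.26   # assumed per-step energy coefficients (placeholders)

def energy_cost(speed, dt):
    """Locomotion energy spent over one step of duration dt at a given speed."""
    return (E_S + E_W * speed ** 2) * dt

def potential(position, goal):
    """Guiding potential: higher (less negative) when closer to the goal."""
    return -np.linalg.norm(goal - position)

def reward(pos, next_pos, goal, dt, gamma=1.0, scale=1.0):
    speed = np.linalg.norm(next_pos - pos) / dt
    shaping = gamma * potential(next_pos, goal) - potential(pos, goal)
    return -energy_cost(speed, dt) + scale * shaping
```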

6.3 New Immersive Experiences

6.3.1 Avoiding virtual humans in a constrained environment: Exploration of novel behavioural measures

Participants: Katja Zibrek [contact], Julien Pettré, Ludovic Hoyet, Anne-Hélène Olivier, Yuliya Patotskaya.

Figure 9: Metro scene from our experiment with virtual characters: “Cooper” (left) expressing the neurotic personality with body and facial motion, and “Yuri” (right) expressing the emotionally stable personality.

In computer animation, the creation of believable and engaging virtual characters has been a long-lasting endeavour. While researchers have investigated several aspects of character design, not many studies have focused on the qualities of biological human motion itself. We approached the perception of motion from the perspective of distinct movement patterns which can be observed in people with neurotic and emotionally stable personality traits. We designed an experiment in virtual reality, using a photo-realistic metro scenario, where we studied the avoidance behaviour of participants when encountering these two types of virtual characters in a constrained environment. We also make a contribution by successfully implementing two behavioural measures in particular: a choice task, and a novel ‘turning point’ metric, which calculates the point in the trajectory when people turned to avoid the character. Our results indicate that users’ behaviour is affected by the characters’ motion, and we propose the use of these behavioural measures to investigate other aspects of character motion in future research. The experimental situation is illustrated in Figure 9. Our results are reported in 11.
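
For illustration, a possible computation of such a 'turning point' from a recorded 2D trajectory is sketched below: it returns the first position at which the smoothed walking direction deviates from the initial heading by more than a threshold. The threshold and the smoothing window are assumptions, not the exact definition used in the study.

```python
import numpy as np

def turning_point(positions, angle_threshold_deg=10.0, window=5):
    """positions: (T, 2) array of participant positions over time.
    Returns the position where avoidance starts, or None if no turn is detected."""
    vel = np.diff(positions, axis=0)
    kernel = np.ones(window) / window                      # smooth out gait oscillations
    vel = np.stack([np.convolve(vel[:, i], kernel, mode="same") for i in range(2)], axis=1)
    heading = np.unwrap(np.arctan2(vel[:, 1], vel[:, 0]))
    deviation = np.abs(heading - heading[:window].mean())  # deviation from initial heading
    above = np.flatnonzero(deviation > np.deg2rad(angle_threshold_deg))
    return positions[above[0]] if above.size else None
```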

6.3.2 The Stare-in-the-Crowd Effect When Navigating a Crowd in Virtual Reality

Participants: Anne-Hélène Olivier [contact], Julien Pettré, Ludovic Hoyet, Katja Zibrek.

Figure 10: Participants navigate a virtual crowd (left). We evaluated the effect of virtual agents' gaze behaviour, looking or not at the participants (middle), on their gaze and locomotor behaviour (right - the colors depict which virtual agent was looked at).

In this work, we investigated the stare-in-the-crowd effect, i.e., the tendency of gaze directed towards us to attract our attention, in the context of navigating a crowd in virtual reality. Participants navigated through a virtual crowd while we manipulated the gaze behaviour of the virtual agents, which either looked or did not look at the participants, and we analysed the effect of this manipulation on participants' own gaze activity and locomotor behaviour. The experimental situation is illustrated in Figure 10. Our results are reported in 14.

In order to create new immersive experiences, part of our work was dedicated to understanding the effect of individual and external factors on human behaviour. This is an important step towards designing realistic virtual humans. The following sections detail our results on the effect of low back pain on locomotion strategies, as well as on the effect of a physical perturbation on standing balance and stepping strategies.

6.3.3 To Stick or Not to Stick? Studying the Impact of Offset Recovery Techniques During Mid-Air Interactions

Participants: Ludovic Hoyet [contact].

Figure 11: Illustration of the contact release methods evaluated in 10. (1-3) Contact phase: when a virtual contact between the user's hand and a virtual object occurs, a spatial offset is generated between the real user's body (transparent) and its virtual counterpart (opaque) to avoid interpenetration, keeping the latter at the level of the virtual object surface. (4-6) Release phase: two explored alternatives to recover the offset generated. Top: in the “sticky” approach, the virtual hand is constrained on the virtual surface until the offset is recovered. Bottom: our proposed “unsticky” technique uses an adaptive offset recovery control law that provides an instantaneous contact release at the expense of a longer offset recovery phase.

During mid-air interactions, common approaches (such as the god-object method) typically rely on visually constraining the user's avatar to avoid visual interpenetrations with the virtual environment in the absence of kinesthetic feedback. In this work 10 performed in collaboration with Ferran Argelaguet and Anatole Lécuyer (Hybrid team), we explored two methods which influence how the position mismatch (positional offset) between users' real and virtual hands is recovered when releasing the contact with virtual objects. The first method (sticky) constrains the user's virtual hand until the mismatch is recovered, while the second method (unsticky) employs an adaptive offset recovery method. In the first study, we explored the effect of positional offset and of motion alteration on users' behavioral adjustments and users' perception. In a second study, we evaluated variations in the sense of embodiment and the preference between the two control laws. Overall, both methods presented similar results in terms of performance and accuracy, yet, positional offsets strongly impacted motion profiles and users' performance. Both methods also resulted in comparable levels of embodiment. Finally, participants usually expressed strong preferences toward one of the two methods, but these choices were individual-specific and did not appear to be correlated solely with characteristics external to the individuals. Taken together, these results highlight the relevance of exploring the customization of motion control algorithms for avatars.
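
The two release behaviours can be contrasted with the simplified sketch below; the exponential recovery gain is an assumed stand-in for the adaptive control law of the paper, and the "sticky" branch is a coarse simplification of the constraint it describes.

```python
import numpy as np

def release_update(real_hand, offset, method="unsticky", gain=5.0, dt=1.0 / 90.0):
    """One frame of offset handling after contact release (toy sketch).

    offset: positional mismatch (virtual - real) accumulated during the contact.
    Returns the virtual hand position and the remaining offset.
    """
    if method == "sticky":
        # Virtual hand remains constrained until the user's real motion
        # brings the mismatch back to zero (offset is kept as-is here).
        return real_hand + offset, offset
    # "unsticky": the contact is released instantly and the offset is blended
    # away over time (exponential recovery; the gain is an assumption).
    new_offset = offset * np.exp(-gain * dt)
    return real_hand + new_offset, new_offset
```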

6.3.4 Locomotion behavior of chronic Non-Specific Low Back Pain (cNSLBP) participants while walking through apertures

Participants: Anne-Hélène Olivier [contact], Armel Crétual, Mathieu Ménard, Agathe Bilhaut.

Figure 12: Illustration of the experimental task we designed to study the effect of chronic non-specific low back pain on action strategies during locomotion.

Chronic Non-Specific Low Back Pain (cNSLBP) has been identified as one of the leading global causes of disability and is characterized by symptoms without clear patho-anatomical origin. The majority of clinical trials assess cNSLBP using scales or questionnaires, reporting an influence of cognitive, emotional and behavioral factors. However, few studies have explored the effect of chronic pain in daily life tasks such as walking and avoiding obstacles, which involve perceptual-motor processes to interact with the environment. In this project, we investigated whether action strategies in a horizontal aperture crossing paradigm are affected by cNSLBP and which factors influence these decisions. To answer those questions, we designed an experiment involving 15 asymptomatic adults (AA) and 15 cNSLBP participants 5. They were asked to walk along a 14 m long path, crossing through apertures ranging from 0.9 to 1.8 times their shoulder width. Their movement was measured using the Qualisys system, and pain perception was evaluated by self-administered questionnaires. Our results showed that the cNSLBP participants stopped rotating their shoulders for a smaller aperture relative to their shoulder width (1.18) than the AA participants (1.33). In addition, these participants walked slower, which gave them more time to make the movement adaptations necessary to cross the aperture. No correlation was found between the variables related to pain perception and the critical point, but the levels of pain were low with a small variability. This study shows that during a horizontal aperture crossing task requiring shoulder rotation to pass through small apertures, cNSLBP participants appear to exhibit a riskier adaptive strategy than AA participants by minimizing rotations that could induce pain. This task thus makes it possible to discriminate between cNSLBP participants and pain-free participants without measuring the level of pain.
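
A possible way to estimate the critical point reported above from individual trials is sketched below: the smallest aperture-to-shoulder-width ratio at which a participant no longer rotates the shoulders beyond a small angle. The rotation threshold is an assumption, not the criterion used in the study.

```python
import numpy as np

def critical_point(aperture_ratios, peak_shoulder_yaw_deg, rotation_threshold_deg=10.0):
    """aperture_ratios: aperture width / shoulder width for each trial.
    peak_shoulder_yaw_deg: peak shoulder rotation recorded in each trial."""
    ratios = np.asarray(aperture_ratios, dtype=float)
    rotated = np.asarray(peak_shoulder_yaw_deg, dtype=float) > rotation_threshold_deg
    frontal = ratios[~rotated]                   # trials crossed without rotating
    return float(frontal.min()) if frontal.size else None
```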

6.3.5 Stepping strategies of young adults undergoing sudden external perturbation from different directions

Participants: Julien Pettré [contact], Charles Pontonnier, Ludovic Hoyet, Anne-Hélène Olivier, Thomas Chatagnon.

Figure 13: Illustration of the experimental conditions we studied

Stepping strategies following external perturbations from different directions are investigated in this work. We analysed the effect of the perturbation angle as well as the level of awareness of individuals, and characterised steps out of the sagittal plane between Loaded Side Steps (LSS), Unloaded Medial Steps (UMS) and Unloaded Crossover Steps (UCS). A novel experimental paradigm involving perturbations in different directions was performed on a group of 21 young adults (10 females, 11 males, 20–38 years). Participants underwent 30 randomised perturbations along 5 different angles, as illustrated in Figure 13, with different levels of awareness of the upcoming perturbations (with and without wearing a sensory impairment device) for a total of 1260 recorded trials. Results showed that logistic models based on the minimal values of the Margin of Stability (MoS) or on the minimal values of the Time to boundary (Ttb) performed the best in the sagittal plane. Nevertheless, their accuracy stayed above 79% regardless of the perturbation angle or level of awareness. Regarding the effect of the experimental condition, evidence of different balance recovery behaviours due to the variation of perturbation angles was found, but no significant effect of the level of awareness was observed. Finally, we proposed the Distance to Foot boundary (DtFb) as a relevant quantity to characterise the stepping strategies in response to perturbations out of the sagittal plane. Our results are reported in 7. This work has been performed in collaboration with Charles Pontonnier from the MimeTIC team.
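
For reference, the Margin of Stability used in these logistic models can be sketched as the distance between the extrapolated centre of mass and the base-of-support boundary. The sketch below uses an unsigned minimum distance over a sampled boundary, which simplifies the usual signed formulation.

```python
import numpy as np

G = 9.81  # gravity (m/s^2)

def margin_of_stability(com_xy, com_vel_xy, bos_boundary_xy, leg_length):
    """com_xy, com_vel_xy: horizontal CoM position and velocity.
    bos_boundary_xy: (N, 2) points sampling the base-of-support boundary.
    Note: the sign (inside vs. outside the base of support) is omitted here."""
    omega0 = np.sqrt(G / leg_length)                 # inverted-pendulum frequency
    xcom = com_xy + com_vel_xy / omega0              # extrapolated centre of mass
    distances = np.linalg.norm(bos_boundary_xy - xcom, axis=1)
    return float(distances.min())
```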

6.3.6 Real-time Computational Cinematographic Editing for Broadcasting of Volumetric-captured events: an Application to Ultimate Fighting

Participants: Marc Christie [contact], Anthony Mirabile, Xi Wang.

The capacity to capture and broadcast sports events with close to real-time volumetric reconstruction techniques opens exciting perspectives in how audiences can consume and interact with these contents. In this work, we propose the design of a real-time cinematography system that is capable of generating qualitative framing and editing of volumetric-captured content, by mimicking real broadcast footage. To illustrate our approach, we focus on the specific problem of cinematography for ring-based events such as Ultimate Fighting Championships (UFC). We start by extracting statistical features from hours of real footage to understand the specific framing and cutting behaviors of real broadcast directors. We then exploit these features in a real-time editing system, not only to replicate the behaviors observed in existing broadcasting, but also to generalize to novel camera layouts. The paper was awarded the best conference paper at Motion in Games 2023.
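
A drastically simplified view of such a statistics-driven editing loop is sketched below: shot durations are drawn from a distribution estimated on real footage and, at each cut, the camera with the best framing score is selected. Both the duration sampler and the framing score are placeholders supplied by the caller; the real system is considerably richer.

```python
import numpy as np

def auto_edit(cameras, framing_score, sample_shot_duration, total_time):
    """Return a list of (start_time, camera_index, duration) cuts (toy sketch)."""
    t, cuts = 0.0, []
    while t < total_time:
        duration = sample_shot_duration()                    # learned from real footage
        scores = [framing_score(cam, t) for cam in cameras]  # e.g., visibility of the fighters
        cuts.append((t, int(np.argmax(scores)), duration))
        t += duration
    return cuts
```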

6.3.7 Where to look at the movies: Analyzing visual attention to understand movie editing

Participants: Marc Christie [contact].

In the process of making a movie, directors constantly care about where the spectator will look on the screen. Shot composition, framing, camera movements, or editing are tools commonly used to direct attention. In order to provide a quantitative analysis of the relationship between those tools and gaze patterns, we propose a new eye-tracking database, containing gaze-pattern information on movie sequences, as well as editing annotations, and we show how state-of-the-art computational saliency techniques behave on this dataset. In this work, we expose strong links between movie editing and spectators' gaze distributions, and open several leads on how the knowledge of editing information could improve human visual attention modeling for cinematic content. The dataset generated and analyzed for this study is available at this link. The work was published in Behavior Research Methods.

6.3.8 High-level cinematic knowledge to predict inter-observer visual congruency

Participants: Marc Christie [contact].

When watching the same visual stimulus, humans can exhibit a wide range of gaze behaviors. These variations can be caused by bottom-up factors (i.e., features of the stimulus itself) or top-down factors (i.e., characteristics of the observers). Inter-observer visual congruency (IOC) is a measure of this range. Moreover, it has been shown that cinematic techniques, such as camera motion or shot editing, have a significant impact on this measure. In this work, we first propose a metric for measuring IOC in videos, taking into account the dynamic nature of the stimuli. Then, we propose a model for predicting inter-observer visual congruency in the context of feature films, by using high-level cinematic annotations as prior information in a deep learning framework. See 17 for more details.
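
One way to compute a per-frame inter-observer congruency score, in the spirit of the metric discussed here, is the leave-one-out scheme sketched below; the map resolution, Gaussian blur and normalization are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ioc_frame(fixations_per_observer, shape=(45, 80), sigma=2.0):
    """fixations_per_observer: list of (K_i, 2) arrays of (row, col) fixations."""
    maps = []
    for fx in fixations_per_observer:
        m = np.zeros(shape)
        for r, c in np.asarray(fx, dtype=int):
            m[r, c] += 1.0
        maps.append(gaussian_filter(m, sigma))
    scores = []
    for i, fx in enumerate(fixations_per_observer):
        others = sum(m for j, m in enumerate(maps) if j != i)   # leave-one-out map
        others = others / (others.max() + 1e-9)
        scores.append(np.mean([others[int(r), int(c)] for r, c in fx]))
    return float(np.mean(scores))   # close to 1 = congruent gaze, close to 0 = dispersed
```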

6.3.9 JAWS: Just A Wild Shot for Cinematic Transfer in Neural Radiance Fields

Participants: Marc Christie [contact], Xi Wang.

We designed an optimization-driven approach that achieves the robust transfer of visual cinematic features from a reference in-the-wild video clip to a newly generated clip. To this end, we rely on an implicit neural representation (INR) to compute a clip that shares the same cinematic features as the reference clip. We propose a general formulation of a camera optimization problem in an INR that computes extrinsic and intrinsic camera parameters as well as timing. By leveraging the differentiability of neural representations, we can back-propagate our designed cinematic losses, measured on proxy estimators, through a NeRF network directly to the proposed cinematic parameters. We also introduce specific enhancements such as guidance maps to improve the overall quality and efficiency. Results display the capacity of our system to replicate well-known camera sequences from movies, adapting the framing, camera parameters and timing of the generated video clip to maximize the similarity with the reference clip. See 16 for more details.
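
The gradient-based camera optimization at the core of this approach can be sketched as follows, using PyTorch for automatic differentiation; render_clip and cinematic_loss are hypothetical stand-ins for the differentiable NeRF renderer and the proxy-estimator losses, and the camera parameterization is an assumption.

```python
import torch

def optimize_camera(render_clip, cinematic_loss, reference_features,
                    n_steps=200, lr=1e-2):
    """Optimize extrinsic/intrinsic camera parameters against a reference clip."""
    extrinsics = torch.nn.Parameter(torch.zeros(6))      # translation + rotation (assumed)
    focal = torch.nn.Parameter(torch.tensor(1.0))        # intrinsic parameter (assumed)
    optimizer = torch.optim.Adam([extrinsics, focal], lr=lr)
    for _ in range(n_steps):
        optimizer.zero_grad()
        frames = render_clip(extrinsics, focal)                   # differentiable rendering
        loss = cinematic_loss(frames, reference_features)         # framing / motion losses
        loss.backward()                                           # gradients reach the camera
        optimizer.step()
    return extrinsics.detach(), focal.detach()
```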

6.3.10 BluNF: Blueprint Neural Field

Participants: Marc Christie [contact], Xi Wang.

Neural Radiance Fields (NeRFs) have revolutionized scene novel view synthesis, offering visually realistic, precise, and robust implicit reconstructions. While recent approaches enable NeRF editing, such as object removal, 3D shape modification, or material property manipulation, the manual annotation prior to such edits makes the process tedious. Additionally, traditional 2D interaction tools lack an accurate sense of 3D space, preventing precise manipulation and editing of scenes. In this paper, we introduce a novel approach, called Blueprint Neural Field (BluNF), to address these editing issues. BluNF provides a robust and user-friendly 2D blueprint, enabling intuitive scene editing. By leveraging implicit neural representation, BluNF constructs a blueprint of a scene using prior semantic and depth information. The generated blueprint allows effortless editing and manipulation of NeRF representations. We demonstrate BluNF's editability through an intuitive click-and-change mechanism, enabling 3D manipulations, such as masking, appearance modification, and object removal. Our approach significantly contributes to visual content creation, paving the way for further research in this area.

6.3.11 Contact-conditioned hand-held object reconstruction from single-view images

Participants: Marc Christie [contact], Xiaoyuan Wang.

Reconstructing the shape of hand-held objects from single-view color images is a long-standing problem in computer vision and computer graphics. The task is complicated by the ill-posed nature of single-view reconstruction, as well as potential occlusions due to both the hand and the object. Previous works mostly handled the problem by utilizing known object templates as priors to reduce the complexity. In contrast, our paper proposes a novel approach without knowing the object templates beforehand but by exploiting prior knowledge of contacts in hand-object interactions to train an attention-based network that can perform precise hand-held object reconstructions with only a single forward pass in inference. The network we propose encodes visual features together with contact features using a multi-head attention module as a way to condition the training of a neural field representation. This neural field representation outputs a Signed Distance Field representing the reconstructed object and extensive experiments on three well-known datasets demonstrate that our method achieves superior reconstruction results even under severe occlusion compared to the state-of-the-art.
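
The conditioning mechanism can be sketched as a small attention module fusing per-point queries with visual and contact tokens before predicting a signed distance; the dimensions and layer choices below are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ContactConditionedSDF(nn.Module):
    """Toy neural field: 3D query points attend over visual and contact features."""
    def __init__(self, dim=128, n_heads=4):
        super().__init__()
        self.point_embed = nn.Linear(3, dim)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.head = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, points, visual_tokens, contact_tokens):
        # points: (B, P, 3); visual_tokens: (B, Nv, dim); contact_tokens: (B, Nc, dim)
        queries = self.point_embed(points)
        context = torch.cat([visual_tokens, contact_tokens], dim=1)
        fused, _ = self.attn(queries, context, context)   # condition on visual + contact cues
        return self.head(fused).squeeze(-1)               # signed distance per query point
```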

7 Bilateral contracts and grants with industry

7.1 Bilateral contracts with industry

Cifre InterDigital - Learning-Based Human Character Animation Synthesis for Content Production

Participants: Ludovic Hoyet [contact], Lucas Mourot.

The overall objective of the PhD thesis of Lucas Mourot, which started in June 2020 and was defended in May 2023, was to adapt and improve the state of the art in video animation and human motion modelling in order to develop a semi-automated framework for human animation synthesis that brings real benefits to artists in the movie and advertising industry. In particular, one objective was to leverage novel learning-based approaches to propose skeleton-based animation representations and editing tools that improve the resolution and accuracy of the produced animations, so that automatically synthesized animations can be used interactively by animation artists.

Cifre InterDigital - Deep-based semantic representation of avatars for virtual reality

Participants: Ludovic Hoyet [contact], Philippe De Clermont Gallerande.

The overall objective of the PhD thesis of Philippe De Clermont Gallerande, which started in February 2023, is to explore novel approaches to enable both full-body and facial encoding and decoding for multi-user immersive experiences, and to enable the evaluation of the quality of experience. More specifically, one focus is to propose solutions to represent digital characters (avatars) with semantic-based approaches in a context of multi-user immersive telepresence, which are compact, plausible and simultaneously resilient to data perturbations caused by streaming. This PhD is conducted within the context of the joint laboratory Nemo.ai between Inria and InterDigital, and more specifically within the Ys.ai project, which is dedicated to exploring novel research questions and applications in Virtual Reality. This work is also conducted in collaboration between the two Inria teams Hybrid and Virtus, as well as with the Interactive Media team of InterDigital.

Cifre InterDigital - Facial features from high quality cinema footage

Participants: Marc Christie [contact], Kelian Baert.

The overall objective of the PhD thesis of Kelian Baert, funded by the VFX company Mikros Image, is to explore novel representations to extract facial features from high-quality cinema footage, and to provide intuitive techniques to perform facial editing, including shape, appearance and animation. More precisely, we seek to improve the controllability of learning-based techniques for editing photo-realistic faces in video sequences, aimed at visual effects for cinema. The aim is to accelerate post-production processes on faces by enabling an artist to finely control different characteristics over time. Applications are numerous: rejuvenation and aging, make-up and tattooing, strong morphology modifications (adding a third eye, for example), replacing an understudy's face with the actor's face, or adjusting an actor's performance. The PhD relies on a threefold approach: transferring features from the real to the synthetic domain, editing in the synthetic domain on simplified representations, and transferring the content back to photorealistic sequences using GAN/Diffusion-based models to ensure visual quality.

LCPP (PhD contract) - Immersive crowd simulation for the study and design of public spaces

Participants: Julien Pettré [contact], Ludovic Hoyet, Jordan Martin.

The overall objective of the PhD thesis of Jordan Martin, started in November 2022, is to explore the use of Virtual Reality to better design and assess public spaces. The VirtUs team specialises in the simulation, animation and immersion of virtual crowds. The aim of this thesis is to explore new ways of analysing crowd behaviours in real environments using new virtual reality technologies that allow users to be directly immersed in digital replicas of these situations. More specifically, Jordan explores the technical conditions of Virtual Reality experiments that lead to collecting realistic data. These conditions revolve around the visual representations of virtual humans, as well as display conditions.

7.2 Bilateral Grants with Industry

Unreal MegaGrant

Participants: Marc Christie [contact], Anthony Mirabile.

The objective of the Unreal MegaGrant (70k€) is to initiate the development of Augmented Reality techniques as animation authoring tools. While we have demonstrated the benefits of VR as a relevant authoring tool for artists, we would like to explore with this funding the animation capacities of augmented reality techniques. The underlying challenges pertain to the capacity to precisely estimate hand poses for fine animation tasks, but also to exploring high-level authoring means such as gesturing techniques.

8 Partnerships and cooperations

8.1 International research visitors

8.1.1 Visits of international scientists

Ioannis Karamouzas
  • Status
    researcher
  • Institution of origin:
    University of California, Riverside
  • Country:
    U.S.A.
  • Dates:
    September 9-23 and November 20-30, 2023
  • Context of the visit:
    Ioannis Karamouzas stayed in the team to initiate long-term collaboration with the team and prepare an application to the Inria International Chair program.
  • Mobility program/type of mobility:
    research stay
Melania Prieto Martin
  • Status
    PhD
  • Institution of origin:
    Universidad Rey Juan Carlos (Madrid)
  • Country:
    Spain
  • Dates:
    October 1st - December 1st, 2023
  • Context of the visit:
    H2020 CrowdDNA research project (URJC is a partner)
  • Mobility program/type of mobility:
    research stay
Sina Feldmann
  • Status
    PhD
  • Institution of origin:
    Juelich Research Center (FZJ)
  • Country:
    Germany
  • Dates:
    January 1st - February 28th, 2023
  • Context of the visit:
    H2020 CrowdDNA research project (FZJ is a partner)
  • Mobility program/type of mobility:
    research stay
Juliane Adrian
  • Status
    Postdoc
  • Institution of origin:
    Juelich Research Center (FZJ)
  • Country:
    Germany
  • Dates:
    November 1st, 2022 - February 28th, 2023
  • Context of the visit:
    H2020 CrowdDNA research project (FZJ is a partner)
  • Mobility program/type of mobility:
    research stay
Thomas Uboldi
  • Status
    PhD Student
  • Institution of origin:
    UQAR
  • Country:
    Canada
  • Dates:
    January 2nd, 2023 - February 28th, 2023
  • Context of the visit:
    Collaboration with the AUDITIF project
  • Mobility program/type of mobility:
    research stay
Katja Zibrek
  • Visited institution:
    University of Barcelona and Virtual Bodyworks Barcelona
  • Country:
    Spain
  • Dates:
    01/06/2023 - 31/07/2023
  • Context of the visit:
    Collaboration with the research and industry partner on a perceptual experiment in VR.
  • Mobility program/type of mobility:
    H2020-MSCA-IF ForEVR Secondment

8.2 European initiatives

8.2.1 H2020 projects

H2020 MSCA ITN CLIPE

Participants: Julien Pettré [contact], Vicenzo Abichequer Sangalli, Marc Christie, Ludovic Hoyet, Tairan Yin.

  • Title:
    Creating Lively Interactive Populated Environments
  • Duration:
    2020 - 2024
  • Coordinator:
    University of Cyprus
  • Partners:
    • University of Cyprus (CY)
    • Universitat Politecnica de Catalunya (ES)
    • INRIA (FR)
    • University College London (UK)
    • Trinity College Dublin (IE)
    • Max Planck Institute for Intelligent Systems (DE)
    • KTH Royal Institute of Technology, Stockholm (SE)
    • Ecole Polytechnique (FR)
    • Silversky3d (CY)
  • Inria contact:
    Julien Pettré
  • Summary:
    CLIPE is an Innovative Training Network (ITN) funded by the Marie Skłodowska-Curie program of the European Commission. The primary objective of CLIPE is to train a generation of innovators and researchers in the field of virtual characters simulation and animation. Advances in technology are pushing towards making VR/AR worlds a daily experience. Whilst virtual characters are an important component of these worlds, bringing them to life and giving them interaction and communication abilities requires highly specialized programming combined with artistic skills, and considerable investments: the millions spent on countless coders and designers to develop video games are a typical example. The research objective of CLIPE is to design the next generation of VR-ready characters. CLIPE is addressing the most important current aspects of the problem, making the characters capable of: behaving more naturally; interacting with real users sharing a virtual experience with them; being more intuitively and extensively controllable for virtual worlds designers. To meet our objectives, the CLIPE consortium gathers some of the main European actors in the field of VR/AR, computer graphics, computer animation, psychology and perception. CLIPE also extends its partnership to key industrial actors of populated virtual worlds, giving students the ability to explore new application fields and start collaborations beyond academia.
  • Website:
H2020 FET-Open CrowdDNA

Participants: Julien Pettré [contact], Thomas Chatagnon, Ludovic Hoyet, Alexis Jensen, Anne-Hélène Olivier.

  • Title:
    CrowdDNA
  • Duration:
    2020 - 2024
  • Coordinator:
    Inria
  • Partners:
    • Inria (Fr)
    • ONHYS (FR)
    • University of Leeds (UK)
    • Crowd Dynamics (UK)
    • Universidad Rey Juan Carlos (ES)
    • Forschungszentrum Jülich (DE)
    • Universität Ulm (DE)
  • Inria contact:
    Julien Pettré
  • Summary:
    This project aims to enable a new generation of “crowd technologies”, i.e., systems that can prevent deaths, minimize discomfort and maximize efficiency in the management of crowds. It performs an analysis of crowd behaviour to estimate the characteristics essential to understand its current state and predict its evolution. CrowdDNA is particularly concerned with the dangers and discomforts associated with very high-density crowds such as those that occur at cultural or sporting events or in public transport systems. The main idea behind CrowdDNA is that the analysis of new kinds of macroscopic features of a crowd – such as the apparent motion field that can be efficiently measured in real mass events – can reveal valuable information about its internal structure, provide a precise estimate of a crowd state at the microscopic level, and more importantly, predict its potential to generate dangerous crowd movements. This way of understanding low-level states from high-level observations is similar to the way humans can tell a lot about the physical properties of a given object just by looking at it, without touching it. CrowdDNA challenges the existing paradigms, which rely on simulation technologies to analyse and predict crowds, and also require complex estimations of many features such as density, counting or individual features to calibrate simulations. This vision raises one main scientific challenge, which can be summarized as the need for a deep understanding of the numerical relations between the local – microscopic – scale of crowd behaviours (e.g., contact and pushes at the limb scale) and the global – macroscopic – scale, i.e., the entire crowd.
  • Website:
H2020 MSCA Individual Fellowship

Participants: Katja Zibrek [contact], Julien Pettré.

  • Title:
    ForEVR
  • Duration:
    2021 - 2023
  • Coordinator:
    Inria
  • Secondment placement:
    UB Barcelona, Virtual Bodyworks
  • Inria contact:
    Katja Zibrek
  • Summary:
    The ForEVR project is focused on delivering improved interaction with another user or virtual character in immersive Embodied Virtual Reality (EVR). EVR offers the possibility of interacting with another person in a computer-generated world by using virtual humans, and the interaction can closely resemble real-life, physical interaction. Embodied virtual interaction creates a powerful illusion of presence with another person and is a valuable alternative to physical meetings. A strikingly overlooked problem of such interactive systems is the representation of humans in virtual environments. Current designs are driven by commercial models, which are frequently subject to bias and are not appropriate for non-entertainment applications. On the other hand, characters which appear almost human can cause discomfort and negative evaluations from users. Since virtual reality can induce realistic responses in people, inappropriate character representations can have psychologically damaging effects on users. The ForEVR project proposes a radical approach to solving the problem of character design by focusing on the appeal of character motion to compensate for the issues created by realistic appearance. It employs rigorous methods to define, implement and evaluate motion processing techniques for realistic characters in order to improve their overall appeal, while remaining suitable for non-entertainment applications. Users' responses to the characters are evaluated using standard measures from psychology and novel measures of EVR. The ForEVR project combines interdisciplinary expertise (psychology/computer graphics) and involves one of the foremost research institutes, with a research team specializing in creating behaviour for autonomous virtual humans, in collaboration with an industry partner developing EVR for mental health and rehabilitation applications.

8.3 National initiatives

Défi Ys.AI

Participants: Ludovic Hoyet, Philippe De Clermont Gallerande.

With the recent announcements of massive investments in the Metaverse, which is seen as the future of social and professional immersive communication for the emerging AI-based e-society, there is a need for the development of dedicated metaverse technologies and associated representation formats. In this context, the objective of this joint project between Inria and InterDigital is to focus on the representation formats of digital avatars and their behaviour in a digital and responsive environment. In particular, the primary challenge tackled in this project consists in overcoming the uncanny valley effect to provide users with natural and lifelike social interaction between real and virtual actors, leading to full engagement in those future metaverse experiences.

ANR Animation Conductor

Participants: Marc Christie, Ludovic Hoyet.

Duration: Oct 2023 - Oct 2027. Team funding: 286k€. Partners: LIX (École Polytechnique) and the Dada Animation company.

The fundamental idea of the ANR Animation Conductor project is to (i) express simultaneous multimodal inputs as high-level animation principles in a motion characteristics space, (ii) exploit spatial and temporal characteristics of the input signals to edit existing animations using learning techniques inspired by style transfer, (iii) combine these style-transfer techniques with authoring constraints such as physics-based constraints, and (iv) co-design interactive tools with creative artists to exploit them in industrial pipelines. More precisely, we first aim at providing new ways of understanding which part of an animation a “conductor” (i.e., animator or supervisor) is working on, building a correlation between different input signals and output animation curves by designing a dataset with experienced animators. Secondly, we aim at designing new computational models to efficiently modify 3D animations from mimicked inputs through the use of individual or combined input modalities - namely recordings of voices and sounds, video recordings of body-part gestures, and 3D space-time acquisition from lightweight worn or mounted VR systems - while not requiring a full MOCAP infrastructure. To this end, we propose to develop novel methods able to extract the spatial and temporal authoring potential of these modalities into a motion characteristics space, as well as to explore new ways of leveraging combined modalities to ease 3D animation control, inspired by style-transfer techniques in animation, and to compose them with authoring constraints. This project targets direct applications and prototypes, starting with our open-source prototype, within French animation studios for the refinement of existing animations at the level of shapes and character subparts.

9 Dissemination

Participants: Ludovic Hoyet, Julien Pettré, Marc Christie, Anne-Hélène Olivier, Katja Zibrek.

9.1 Promoting scientific activities

9.1.1 Scientific events: organisation

  • ACM MIG 2023 - The 16th Annual ACM SIGGRAPH Conference on Motion, Interaction and Games. VirtUs organized the conference in Rennes from November 15th to November 17th, 2023. The event was hybrid and gathered around 80 participants, of which 60 were physically present.
  • Symposium of the International Society of Posture and Gait Research (ISPGR). Anne-Hélène Olivier co-organized the symposium "Stepping forward with immersive technology to study, assess, and intervene on locomotor behaviour", held in Australia in July 2023. The event gathered around 60 participants.
General chair, scientific chair
  • Julien Pettré was conference chair for the 16th Annual ACM SIGGRAPH Conference on Motion, Interaction and Games in Rennes from November 15th to November 17th, 2023 (ACM MIG 2023).
Member of the organizing committees
  • ACM IMX 2023. Marc Christie was Workshop chair (8 workshops).

9.1.2 Scientific events: selection

Chair of conference program committees
  • ISMAR 2023. Anne-Hélène Olivier was a conference paper chair.
  • IEEE IROS 2023. Julien Pettré was Editor for IEEE IROS 2023.
Member of the conference program committees
  • Ludovic Hoyet: ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games 2023, ACM Motion, Interaction and Games 2023, ACM Symposium on Applied Perception 2023, ACM Symposium on Computer Animation 2023, IEEE International Conference on Artificial Intelligence and eXtended and Virtual Reality 2023, Journées Françaises de l'Informatique Graphique 2023
  • Anne-Hélène Olivier: IEEE VR (journal track), ACM Symposium on Applied Perception, ACM Symposium on Virtual Reality Software and Technology.
  • Marc Christie: ACM Multimedia, ACM Motion in Games, ACM Computer Animation and Social Agents
Reviewer
  • Ludovic Hoyet: ACM SIGGRAPH 2023, ACM SIGGRAPH Asia 2023, ACM CHI Conference on Human Factors in Computing Systems 2023, IEEE ISMAR 2023, IEEE Virtual Reality 2024
  • Anne-Hélène Olivier: SIGGRAPH, Doctoral Consortium IEEE VR, IROS, ISMAR, CEIG, CASA, JFAPA
  • Katja Zibrek: ACM Motion, Interaction and Games, IEEE ISMAR, ACM CHI Conference on Human Factors in Computing Systems 2023, IEEE Virtual Reality, ACM Symposium on Applied Perception, ACM Symposium on Virtual Reality Software and Technology (VRST)
  • Marc Christie: ACM Multimedia, ACM IMX, IROS, ACM Motion in Games, ACM SIGGRAPH, ACM SIGGRAPH ASIA.

9.1.3 Journal

Member of the editorial boards
  • Anne-Hélène Olivier: Associate editor of IEEE Transactions on Visualization and Computer Graphics journal.
  • Julien Pettré is Associate Editor for Computer Graphics Forum, Computer Animation and Virtual Worlds, and Collective Dynamics
Reviewer - reviewing activities
  • Ludovic Hoyet: IEEE Transactions on Visualization and Computer Graphics, International Journal of Human-Computer Studies, Computers & Graphics
  • Anne-Hélène Olivier: IEEE Transactions on Visualization and Computer Graphics
  • Julien Pettré: Safety Science, Elsevier Physica A, Transportation Research Part C, Royal Society Open Science, Elsevier Computers & Graphics
  • Katja Zibrek: ACM Transactions on Applied Perception, Computers & Graphics

9.1.4 Invited talks

  • Ludovic Hoyet. Demain, des Avatars plus Interactifs et Incarnés. PIX Festival 2023, Lille (France).
  • Ludovic Hoyet. Humains Virtuels Animés: une Approche Basée Perception. Cycle de conférences “Vers l'émergence d'un droit neuro-éthique en contrepoint des droits revisités par le numérique ? Réflexion à partir du droit de la consommation”. Atelier 2, 9th of Nov. 2023, Paris (France).
  • Olivier, AH. Social agents and non-verbal interactions: when Movement Science meets Virtual Reality, Trinity College Dublin, Dublin, Ireland, June 7, 2023.
  • Olivier, AH. Social agents and non-verbal interactions: when Movement Science meets Virtual Reality, Keynote Speaker, Computer Animation and Social Agent Conference, Limassol, Cyprus, May 30, 2023.
  • Olivier, AH. Interactions between pedestrians - from real to virtual studies: validation, applications and future directions. Inria Morphéo, Grenoble, France, May 11, 2023.
  • Olivier, AH. Interactions entre piétons : des études réelles aux études en environnement virtuel. Sports Sciences seminar, Nanterre, France, March 9, 2023.
  • Julien Pettré. EuroSonic 2023 - YES Group Seminar (Yourope Event Safety Group): “Hellfest Festival Experiments on Dense Crowds”
  • Julien Pettré. Mille Plateaux 2023 - National Choreography Centre of La Rochelle: “Danse crowds, dense crowds”
  • Julien Pettré. SNCF Seminar “Comprendre l'affluence à quai entre comportements et statistiques”, invited by S. Morgagni.
  • Julien Pettré. Seminar at University of Bourgogne, Dijon invited by C. Demonceaux.
  • Katja Zibrek. Presentation at a seminar: “Behavioral methods in VR”, School of Psychology, University of Aberdeen, UK, 5th October 2023.
  • Katja Zibrek. Presentation and round table discussion at the “ResoFeux” conference in Nancy, France, 7th December 2023.
  • Marc Christie, Keynote talk at the ICCV workshop on Creative Video Editing and Understanding, "Understanding Style in Movies", 10th October 2023.
  • Marc Christie, Round table at the ICCV workshop on Creative Video Editing and Understanding, "AI Generated video contents: perspectives and challenges", 10th October 2023.

9.1.5 Leadership within the scientific community

  • Symposium on Applied Perception. Anne-Hélène Olivier is chairing the steering committee of this international society which gathers researchers from the fields of Perception, Psychology and Computer Graphics.

9.1.6 Scientific expertise

  • Marc Christie
    • Member of the Centre National de la Cinématographie et de l'image animée, Aides aux moyens techniques : collège « production numérique »
    • Member of the Agence Nationale de la Recherche evaluation committee CE 38
    • Reviewer for H2020 Cascade funding proposals (EMIL XR)
    • Reviewer for Industry research proposals by BPI (Banque Publique d'investissement)
    • Reviewer for Fondation Canadienne de l'Innovation - Québec, Canada

9.1.7 Research administration

  • Ludovic Hoyet is in charge of the “Virtual Reality, Virtual Humans, Interactions and Robotics” department of the IRISA laboratory (Institut de Recherche en Informatique et Systèmes Aléatoires). This department includes four Inria teams (Hybrid, MimeTIC, Rainbow, Virtus) and is aligned with one of the strategic objectives of the Inria Rennes centre described in the Inria COP (i.e., “humans-robots-virtual worlds interactions”).

9.2 Teaching - Supervision - Juries

9.2.1 Teaching

  • Master : Ludovic Hoyet, Motion Analysis and Gesture Recognition, 12h, INSA Rennes, France
  • Master : Ludovic Hoyet, Computer Graphics, 8h, Ecole Normale Supérieure de Rennes, France
  • Master : Ludovic Hoyet, Réalité Virtuelle pour l'Analyse Ergonomique, Master Ingénierie et Ergonomie des Activités Physiques, 21h, University Rennes 2, France
  • Master: Marc Christie, Head of Master 2 Ingénierie Logicielle (45 students), University of Rennes 1, France
  • Master: Marc Christie, "Multimedia Mobile", Master 2, leader of the module, 32h (IL) + 32h (Miage), Computer Science, University of Rennes 1, France
  • Master: Marc Christie, "Projet Industriel Transverse", Master 2, 32h, leader of the module, Computer Science, University of Rennes 1, France
  • Master: Marc Christie, "Modelistion Animation Rendu", Master 2, 16h, leader of the module, Computer Science, University of Rennes 1, France
  • Master: Marc Christie, "Web Engineering", Master 1, 16h, leader of the module, Computer Science, University of Rennes 1, France
  • Master: Marc Christie, "Advanced Computer Graphics", Master 1, 10h, leader of the module, Computer Science, ENS, France
  • Master: Marc Christie, "Motion for Animation and Robotics", Master 2 SIF, Computer Science, France
  • Master : Anne-Hélène Olivier, co-leader of the APPCM Master (60 students) "Activités Physiques et Pathologies Chroniques et Motrices", STAPS, University Rennes2, France
  • Master : Anne-Hélène Olivier, "Recueil et traitement des données", 26H, Master 1 and 2 APPCM/IEAP/EOPS, University Rennes2, France
  • Master : Anne-Hélène Olivier, "Evaluation fonctionnelle des pathologies motrices", 3H Master 2 APPCM, University Rennes2, France
  • Master : Anne-Hélène Olivier, "Méthodologie de la recherche et accompagnement de stage", 15H, Master 1 and 2 APPCM, University Rennes2, France
  • Licence : Anne-Hélène Olivier, "Analyse cinématique du mouvement", 100H , Licence 1, University Rennes 2, France
  • Licence : Anne-Hélène Olivier, "Biomécanique spécifique aux APA", 20H , Licence 3, University Rennes 2, France
  • Licence : Anne-Hélène Olivier, "Biomécanique de l'avance en âge", 12H , Licence 3, University Rennes 2, France

9.2.2 Supervision

  • PhD defended (beginning Sept. 2019, defended Feb. 2023): Alberto Jovane, Modélisation de mouvements réactifs et comportements non verbaux pour la création d'acteurs digitaux pour la réalité virtuelle, Marc Christie, Ludovic Hoyet, Claudio Pacchierotti (Rainbow team), Julien Pettré.
  • PhD defended (beginning Jun. 2020, defended May 2023): Lucas Mourot, Learning-Based Human Character Animation Synthesis for Content Production, Pierre Hellier (InterDigital), Ludovic Hoyet, François Le Clerc (InterDigital).
  • PhD defended (beginning Nov. 2020, defended Dec. 2023): Thomas Chatagnon, Micro-to-macro energy-based interaction models for dense crowds behavioral simulations, Ecole normale supérieure de Rennes, Ludovic Hoyet, Anne-Hélène Olivier, Charles Pontonnier (MimeTIC), Julien Pettré.
  • PhD defended (beginning Nov. 2020, defended Nov. 2023): Ariel Kwiatkowski, Simulation de foules avec l'apprentissage par renforcement, Ecole Polytechnique, Marie-Paule Cani, Julien Pettré.
  • PhD in progress (beginning Sept. 2020-defence Feb 2024): Agathe Bilhaut, Stratégies perceptivo-motrices durant la locomotion des patients douloureux chroniques : nouvelles méthodes d'analyse et de suivi, Armel Crétual, Anne-Hélène Olivier, Mathieu Ménard (Institut Ostéopathie Rennes, M2S)
  • PhD in progress (beginning Oct. 2020): Emilie Leblong, Prise en compte des interactions sociales dans un simulateur de conduite de fauteuil roulant électrique en réalité virtuelle : favoriser l'apprentissage pour une mobilité inclusive, Anne-Hélène Olivier, Marie Babel (Rainbow team)
  • PhD in progress (beginning Nov. 2020): Tairan Yin, Création de scènes peuplées dynamiques pour la réalité virtuelle, Marc Christie, Marie-Paule Cani (LIX), Ludovic Hoyet, Julien Pettré.
  • PhD in progress (beginning Oct. 2020): Jean-Baptiste Bordier, Innovative interfaces for the creation of character animations, Marc Christie
  • PhD in progress (beginning Nov. 2021): Rim Rekik Dit Nekhili, Learning and evaluating 3D human motion synthesis, Anne-Hélène Olivier, Stefanie Wuhrer (Morpheo team).
  • PhD in progress (beginning Oct. 2021): Xiaoyuan Wang, Realistic planning of hand motions, with Marc Christie, Adnane Boukhayma (MimeTIC team).
  • PhD in progress (beginning Jan. 2022): Yuliya Patotskaya, Appealing Character Design for Embodied Virtual Reality, Ludovic Hoyet, Julien Pettré, Katja Zibrek.
  • PhD in progress (beginning Oct. 2022): Jordan Martin, Simulation immersive de foule pour l’étude et l’aménagement de lieux destinés à accueillir du public, Ludovic Hoyet, Jean-Luc Paillat, Julien Pettré, Etienne Pinsard.
  • PhD in progress (beginning Oct. 2022): Alexis Jensen, Simulation de foule dense par modélisation dynamique, Julien Pettré, Charles Pontonnier (MimeTIC).
  • PhD in progress (beginning Feb. 2023): Philippe De Clermont Gallerande (CIFRE InterDigital), Deep-based semantic representation of avatars for virtual reality, Ferran Argelaguet (Hybrid team), Quentin Avril (InterDigital), Philippe-Henri Gosselin (InterDigital), Ludovic Hoyet.
  • PhD in progress (beginning Feb. 2023): Tony Wolff, Creating socially reactive virtual characters for enhanced social interactions in Virtual Reality, Ludovic Hoyet, Anne-Hélène Olivier, Julien Pettré, Katja Zibrek.
  • PhD in progress (beginning Sept. 2023): Céline Finet, Peuplement de scène par simulation de foule basée apprentissage, Julien Pettré.
  • PhD in progress (beginning Oct. 2023): Kelian Baert, Face Editing for Digital Visual Effects in Film Production, Marc Christie, Adnane Boukhayma.

9.2.3 Juries

HDR
  • Anne-Hélène Olivier: Régis Lobjois (September 2023) “Evaluation des simulateurs de conduite : De la validité comportementale à la validité psychologique”. Nantes Université - Examiner
PhD Thesis
  • Ludovic Hoyet: Rémi Poivet (Dec. 2023) “L'intelligence et la crédibilité perçues des agents virtuels dans les jeux vidéo : Exploration de la relation entre les facteurs de conception et les attentes des joueurs.” Sorbonne Université - Examiner.
  • Anne-Hélène Olivier: Yann-Romain Kechabia (December 2023) “Approche synergique comportementale pour découvrir des déficiences d'interactions vision-posture-attention dans la maladie de Parkinson.” Université de Lille - Examiner.
  • Anne-Hélène Olivier: Mathieu Thomas (October 2023) “Visualiser les affordances : la clef pour l'appontage d'hélicoptère ?”. Aix-Marseille Université - Examiner.
  • Anne-Hélène Olivier: Daphné van Opstal (July 2023), “The informational basis of human locomotor interception behavior: lessons from curvilinear target trajectories”. Aix-Marseille Université - Reviewer
  • Anne-Hélène Olivier: Mohammadreza Mirzaei (June 2023), “Auditory Sensory Substitution in Virtual Reality for People with Hearing Impairments”, Faculty of Informatics, TU Wien, Austria - Reviewer
  • Anne-Hélène Olivier: Goksu Yamac (June 2023), "Understanding and Improving Physical Interactions in Virtual Reality". School of Computer Science and Statistics, Trinity College Dublin - Reviewer
  • Anne-Hélène Olivier: Mathieu Marsot (May 2023), "Data driven representation and synthesis of 3D human motion", Univ Grenoble Alpes - Examiner
  • Julien Pettré: Mathias Guy Delahaye (June 30, 2023), “From finger animation to full-body embodiment of avatars with different morphology and proportions”, EPFL - Reviewer
Master
  • Anne-Hélène Olivier: Master Defence APAS and IEAP

9.3 Popularization

  • Participation in Pint of Science, Anne-Hélène Olivier: L'humain virtuel, un humain comme les autres ? Rennes, May 24, 2023.
  • Reality-Check festival presentation, Katja Zibrek: Virtually Possible: Perceiving Virtual Humans in VR, Enschede, Netherlands, 30th March 2023.

9.3.1 Education

  • Invited scientist at the Collège du Querpon: "Sciences de l'Information et Intelligence Artificielle".

10 Scientific production

10.1 Major publications

10.2 Publications of the year

International journals

International peer-reviewed conferences

Conferences without proceedings

  • [17] A. Bruckert and M. Christie. “High-level cinematic knowledge to predict inter-observer visual congruency”. WICED x Cinemotions 2023 - Workshop on Intelligent Cinematography and Editing, and Emotions in Movies, Nantes, France, 2023, pp. 1-6.

  1. To illustrate the benefit of immersive simulation, note that we account for this robot's real mechanical behaviour and processing capabilities because the robot is immersed and not simulated within this virtual public space.