Activity report
RNSR: 201120991Y
In partnership with:
Université Haute Bretagne (Rennes 2), Université Rennes 1, École normale supérieure de Rennes
Team name:
Analysis-Synthesis Approach for Virtual Human Simulation
In collaboration with:
Institut de recherche en informatique et systèmes aléatoires (IRISA)
Perception, Cognition and Interaction
Interaction and visualization
Creation of the Team: 2011 January 01, updated into Project-Team: 2014 January 01


  • A5.1.3. Haptic interfaces
  • A5.1.5. Body-based interfaces
  • A5.1.9. User and perceptual studies
  • A5.4.2. Activity recognition
  • A5.4.5. Object tracking and motion analysis
  • A5.4.8. Motion capture
  • A5.5.4. Animation
  • A5.6. Virtual reality, augmented reality
  • A5.6.1. Virtual reality
  • A5.6.3. Avatar simulation and embodiment
  • A5.6.4. Multisensory feedback and interfaces
  • A5.10.3. Planning
  • A5.10.5. Robot interaction (with the environment, humans, other robots)
  • A5.11.1. Human activity analysis and recognition
  • A6. Modeling, simulation and control
  • B1.2.2. Cognitive science
  • B2.5. Handicap and personal assistances
  • B2.8. Sports, performance, motor skills
  • B5.1. Factory of the future
  • B5.8. Learning and training
  • B7.1.1. Pedestrian traffic and crowds
  • B9.2.2. Cinema, Television
  • B9.2.3. Video games
  • B9.4. Sports

1 Team members, visitors, external collaborators

Research Scientists

  • Franck Multon [Team leader, Inria, Senior Researcher, HDR]
  • Adnane Boukhayma [Inria, Researcher]
  • Ludovic Hoyet [Inria, Researcher]
  • Katja Zibrek [Inria, Starting Research Position, from Feb 2020]

Faculty Members

  • Nicolas Bideau [Univ Rennes 2, Associate Professor]
  • Benoit Bideau [Univ Rennes 2, Professor, HDR]
  • Marc Christie [Univ Rennes 1, Associate Professor]
  • Armel Crétual [Univ Rennes 2, Associate Professor, HDR]
  • Georges Dumont [École normale supérieure de Rennes, Professor, HDR]
  • Diane Haering [Univ Rennes 2, Associate Professor]
  • Simon Kirchhofer [École normale supérieure de Rennes, ATER, from Sep 2020]
  • Richard Kulpa [Univ Rennes 2, Associate Professor, HDR]
  • Fabrice Lamarche [Univ Rennes 1, Associate Professor]
  • Guillaume Nicolas [Univ Rennes 2, Associate Professor]
  • Anne-Hélène Olivier [Univ Rennes 2, Associate Professor]
  • Charles Pontonnier [École normale supérieure de Rennes, Associate Professor, HDR]

Post-Doctoral Fellows

  • Pierre Raimbaud [Inria, from Dec 2020]
  • Bhuvaneswari Sarupuri [Univ Rennes 2, from Oct 2020]

PhD Students

  • Vicenzo Abichequer-Sangalli [Inria, from Nov. 2020, co-supervised with Rainbow]
  • Jean Basset [Inria, co-supervised with Morpheo]
  • Florian Berton [Inria, co-supervised with Rainbow]
  • Jean Baptiste Bordier [Univ Rennes 1, from Oct 2020]
  • Hugo Brument [Univ Rennes 1, Co-supervised with Rainbow and Hybrid teams]
  • Ludovic Burg [Univ Rennes 1]
  • Thomas Chatagnon [Inria, from Nov 2020, co-supervised with Rainbow]
  • Adèle Colas [Inria, co-supervised with Rainbow]
  • Erwan Delhaye [Univ Rennes 2]
  • Louise Demestre [École normale supérieure de Rennes]
  • Diane Dewez [Univ Rennes 1, co-supervised with Hybrid]
  • Charles Faure [Univ Rennes 2, until Aug 2020]
  • Rebecca Fribourg [Inria, co-supervised with Hybrid]
  • Olfa Haj Mahmoud [Faurecia]
  • Nils Hareng [Univ Rennes 2]
  • Simon Hilt [École normale supérieure de Rennes, until Aug 2020]
  • Alberto Jovane [Inria, co-supervised with Rainbow]
  • Qian Li [Inria, from Oct 2020]
  • Annabelle Limballe [Univ Rennes 2]
  • Claire Livet [École normale supérieure de Rennes]
  • Amaury Louarn [Univ Rennes 1, until Nov 2020]
  • Pauline Morin [École normale supérieure de Rennes, from Sep 2020]
  • Lucas Mourot [InterDigital, from Jun 2020]
  • Benjamin Niay [Inria]
  • Nicolas Olivier [InterDigital]
  • Theo Perrin [École normale supérieure de Rennes, until Aug 2020]
  • Pierre Puchaud [Fondation Saint-Cyr, until Nov 2020]
  • Carole Puil [IFPEK Rennes]
  • Pierre Touzard [Univ Rennes 2]
  • Alexandre Vu [Univ Rennes 2]
  • Xi Wang [Univ Rennes 1]
  • Tairan Yin [Inria, from Nov. 2020, co-supervised with Rainbow]
  • Mohamed Younes [Inria, from Dec 2020]

Technical Staff

  • Robin Adili [Inria, Engineer, from Sep 2020]
  • Ronan Gaugne [Univ Rennes 1, Engineer]
  • Shubhendu Jena [Inria, Engineer, from Dec 2020]
  • Anthony Mirabile [SATT Ouest Valorisation, Engineer]
  • Adrien Reuzeau [Inria, Engineer, from Oct 2020]
  • Anthony Sorel [Univ Rennes 2, Engineer]
  • Xiaofang Wang [Univ de Reims Champagne-Ardennes, Engineer, from May 2020]

Interns and Apprentices

  • Robin Adili [Inria, from Feb 2020 until Jul 2020]
  • Jean Peic Chou [Univ Rennes 1, from May 2020 until Aug 2020]
  • Thomas Kergoat [Inria, until Apr 2020]
  • Dan Mahoro [Inria, from Jun 2020]
  • Florence Maqueda [Inria, from May 2020 until Jul 2020]
  • Lendy Mulot [École normale supérieure de Rennes, from May 2020 until Jul 2020]
  • Pierre Nicot-Berenger [Univ Rennes 2, until May 2020]
  • Steven Picard [Inria, until Jul 2020]

Administrative Assistant

  • Nathalie Denis [Inria]

Visiting Scientist

  • Iana Podkosova [Université de Vienne - Autriche, from Sep 2020 until Nov 2020]

2 Overall objectives

2.1 Presentation

MimeTIC is a multidisciplinary team whose aim is to better understand and model human activity in order to simulate realistic autonomous virtual humans: realistic behaviors, realistic motions and realistic interactions with other characters and users. This requires modeling the complexity of the human body, as well as of the environment in which it picks up information and on which it acts. A specific focus is dedicated to human physical activity and sports, as these raise the strongest constraints and the highest complexity when addressing these problems. Thus, MimeTIC is composed of experts in computer science whose research interests are computer animation, behavioral simulation, motion simulation, crowds and interaction between real and virtual humans. MimeTIC also includes experts in sports science, motion analysis, motion sensing, biomechanics and motion control. Hence, the scientific foundations of MimeTIC are motion sciences (biomechanics, motion control, perception-action coupling, motion analysis), computational geometry (modeling of the 3D environment, motion planning, path planning) and the design of protocols in immersive environments (use of virtual reality facilities to analyze human activity).

Thanks to these skills, we wish to reach the following objectives: to make virtual humans behave, move and interact in a natural manner in order to increase immersion and to improve knowledge of human motion control. In real situations (see Figure 1), people have to deal with their physiological, biomechanical and neurophysiological capabilities in order to reach a complex goal. Hence MimeTIC addresses the problem of modeling the anatomical, biomechanical and physiological properties of human beings. Moreover, these characters have to deal with their environment. Firstly, they have to perceive this environment and pick up relevant information. MimeTIC thus addresses the problem of modeling the environment, including its geometry and associated semantic information. Secondly, they have to act on this environment to reach their goals. This involves cognitive processes, motion planning, joint coordination and force production.

Figure 1: Main objective of MimeTIC: to better understand human activity in order to improve virtual human simulations. It involves modeling the complexity of human bodies, as well as of environments where to pick-up information and act upon.

In order to reach the above objectives, MimeTIC has to address three main challenges:

  • dealing with the intrinsic complexity of human beings, especially when addressing the problem of interactions between people for which it is impossible to predict and model all the possible states of the system,
  • making the different components of human activity control (such as the biomechanical and physical, the reactive, cognitive, rational and social layers) interact while each of them is modeled with completely different states and time sampling,
  • and being able to measure human activity while balancing between ecological and controllable protocols, and to extract relevant information from large databases.

Contrary to many classical approaches in computer simulation, which mostly propose simulations without trying to understand how real people behave, the team promotes a coupling between human activity analysis and synthesis, as shown in Figure 2.

Figure 2: Research path of MimeTIC: coupling analysis and synthesis of human activity enables us to create more realistic autonomous characters and to evaluate assumptions about human motion control.

In this research path, improving knowledge of human activity enables us to highlight fundamental assumptions about the natural control of human activities. These contributions can be promoted in, e.g., biomechanics, motion sciences and neurosciences. Based on these assumptions, we propose new algorithms for controlling autonomous virtual humans. The virtual humans can perceive their environment and decide on the most natural action to reach a given goal. This work is promoted in computer animation and virtual reality, and has some applications in robotics through collaborations. Once autonomous virtual humans have the ability to act as real humans would in the same situation, it is possible to make them interact with others, i.e., autonomous characters (for crowd or group simulations) as well as real users. The key idea here is to analyze to what extent the assumptions proposed at the first stage lead to natural interactions with real users. This process enables the validation of both our assumptions and our models.

Among all the problems and challenges described above, MimeTIC focuses on the following domains of research:

  • motion sensing which is a key issue to extract information from raw motion capture systems and thus to propose assumptions on how people control their activity,
  • human activity & virtual reality, which is explored through sports applications in MimeTIC. This domain enables the design of new methods for analyzing the perception-action coupling in human activity, and the validation of whether autonomous characters lead to natural interactions with users,
  • interactions in small and large groups of individuals, to understand and model interactions with a lot of individual variability, such as in crowds,
  • virtual storytelling which enables us to design and simulate complex scenarios involving several humans who have to satisfy numerous complex constraints (such as adapting to the real-time environment in order to play an imposed scenario), and to design the coupling with the camera scenario to provide the user with a real cinematographic experience,
  • biomechanics which is essential to offer autonomous virtual humans who can react to physical constraints in order to reach high-level goals, such as maintaining balance in dynamic situations or selecting a natural motor behavior among the whole theoretical solution space for a given task,
  • and autonomous characters which is a transversal domain that can reuse the results of all the other domains to make these heterogeneous assumptions and models provide the character with natural behaviors and autonomy.

3 Research program

3.1 Biomechanics and Motion Control

Human motion control is a highly complex phenomenon that involves several layered systems, as shown in Figure 3. Each layer of this controller is responsible for dealing with perceptual stimuli in order to decide the actions that should be applied to the human body and its environment. Due to the intrinsic complexity of the information (internal representation of the body and mental state, external representation of the environment) used to perform this task, it is almost impossible to model all the possible states of the system. Even for simple problems, there generally exists an infinity of solutions. For example, from the biomechanical point of view, there are many more actuators (i.e., muscles) than degrees of freedom, leading to an infinity of muscle activation patterns for a unique joint rotation. From the reactive point of view, there exists an infinity of paths to avoid a given obstacle in navigation tasks. At each layer, the key problem is to understand how people select one solution among these infinite state spaces. Several scientific domains have addressed this problem from specific points of view, such as physiology, biomechanics, neurosciences and psychology.

Figure 3: Layers of the motion control natural system in humans.

In biomechanics and physiology, researchers have proposed hypotheses based on accurate joint modeling (to identify the real anatomical rotation axes), energy minimization, force and torque minimization, comfort maximization (i.e., avoiding joint limits), and physiological limitations in muscle force production. All these constraints have been used in optimal controllers to simulate natural motions. The main problem is thus to define how these constraints are composed, for instance by searching for the weights used to linearly combine these criteria in order to generate a natural motion. Musculoskeletal models are stereotypical examples for which there exists an infinity of muscle activation patterns, especially when dealing with antagonist muscles. An unresolved problem is to define how to use the above criteria to retrieve the actual activation patterns, while optimization approaches still lead to unrealistic ones. It is an open problem that will require multidisciplinary skills, including computer simulation, constraint solving, biomechanics, optimal control, physiology and neuroscience.
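
As a toy illustration of how such criteria can be linearly combined, the sketch below (our own minimal example, not a model from the team) selects the duration of a single-joint minimum-jerk reach by trading an effort criterion against a time criterion; the weights and parameter values are arbitrary.

```python
import numpy as np

# Illustrative sketch (made-up weights, not a model from this report):
# choose a movement duration by minimizing a weighted sum of criteria.

GOAL = np.deg2rad(90.0)           # reach amplitude in rad (assumed)
W_EFFORT, W_TIME = 0.01, 1.0      # hand-picked combination weights

def effort(T, n=201):
    """Integral of squared angular acceleration for a minimum-jerk
    reach of amplitude GOAL completed in T seconds."""
    t = np.linspace(0.0, T, n)
    s = t / T
    theta = GOAL * (10*s**3 - 15*s**4 + 6*s**5)   # min-jerk profile
    acc = np.gradient(np.gradient(theta, t), t)
    return np.sum(acc**2) * (t[1] - t[0])

# Fast movements cost effort, slow ones cost time: the weighted
# combination selects an intermediate, "natural-looking" duration.
durations = np.linspace(0.3, 2.0, 171)
costs = [W_EFFORT * effort(T) + W_TIME * T for T in durations]
best_T = durations[int(np.argmin(costs))]
print(f"selected duration: {best_T:.2f} s")
```

Changing the weights shifts the selected duration, which is exactly the identification problem described above: which weighting reproduces what humans actually do.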

In neuroscience, researchers have proposed other theories, such as coordination patterns between joints driven by simplifications of the variables used to control the motion. The key idea is to assume that, instead of controlling all the degrees of freedom, people control higher-level variables which correspond to combinations of joint angles. In walking, data reduction techniques such as Principal Component Analysis have shown that lower-limb joint angles are generally projected onto a single plane whose orientation in the state space is associated with energy expenditure. Although knowledge exists for specific motions, such as locomotion or grasping, this type of approach is still difficult to generalize. The key problem is that many variables are coupled and it is very difficult to objectively study the behavior of a single variable across various motor tasks. Computer simulation is a promising method to evaluate such assumptions, as it makes it possible to accurately control all the variables and to check whether they lead to natural movements.
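
The planar covariation idea can be reproduced on synthetic data: if three joint angles are driven by only two shared waveforms, PCA recovers a plane capturing almost all the variance. The waveforms and gains below are invented for illustration.

```python
import numpy as np

# Synthetic illustration of planar covariation: hip, knee and ankle
# angles are generated from only two shared "control" waveforms
# (made-up gains), so PCA should find that two components explain
# almost all of the variance.

rng = np.random.default_rng(0)
phase = np.linspace(0.0, 2*np.pi, 200)            # one gait cycle
u1, u2 = np.sin(phase), np.sin(2*phase + 0.5)     # two shared controls

hip   =  0.6*u1 + 0.1*u2
knee  =  0.9*u1 - 0.5*u2
ankle = -0.3*u1 + 0.7*u2
X = np.column_stack([hip, knee, ankle])
X += 0.01 * rng.standard_normal(X.shape)          # measurement noise

Xc = X - X.mean(axis=0)                           # center, PCA via SVD
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)
print(explained)   # first two values dominate: a plane in angle space
```

On real gait data the same computation is what reveals the plane mentioned above; here the low-dimensional structure is built in by construction.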

Neuroscience also addresses the problem of coupling perception and action by providing control laws based on visual cues (or any other senses), such as determining how optical flow is used to control direction in navigation tasks while dealing with collision avoidance or interception. Coupling of the control variables is enhanced in this case, as the state of the body is enriched by the large amount of external information that the subject can use. Virtual environments inhabited by autonomous characters whose behavior is driven by motion control assumptions are a promising approach to this problem. For example, an interesting problem in this field is navigation in an environment inhabited by other people. Typically, avoiding static obstacles together with other people moving through the environment is a combinatorial problem that strongly relies on the coupling between perception and action.
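
A typical perception-based quantity used in collision-avoidance studies is the distance at closest approach under constant-velocity extrapolation (often called minimal predicted distance). The sketch below is our own simplified formulation, not the team's exact model:

```python
import numpy as np

def minimal_predicted_distance(p_a, v_a, p_b, v_b):
    """Assuming both walkers keep their current velocity, return the
    time of closest approach (clamped to the future) and the distance
    between them at that time. Simplified illustration."""
    dp = np.asarray(p_b, float) - np.asarray(p_a, float)
    dv = np.asarray(v_b, float) - np.asarray(v_a, float)
    denom = float(dv @ dv)
    t_star = 0.0 if denom < 1e-12 else max(0.0, -float(dp @ dv) / denom)
    d_star = float(np.linalg.norm(dp + t_star * dv))
    return t_star, d_star

# Two walkers on orthogonal, intersecting paths at 1 m/s: they are on
# a collision course (predicted distance reaches zero after 5 s).
t, d = minimal_predicted_distance([0, 0], [1, 0], [5, -5], [0, 1])
print(t, d)  # → 5.0 0.0
```

In an avoidance model, a maneuver would typically be triggered when this predicted distance falls below a comfort threshold within the walker's anticipation horizon.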

One of the main objectives of MimeTIC is to enhance knowledge of human motion control by developing innovative experiments based on computer simulation and immersive environments. To this end, designing experimental protocols is a key point, and some of the researchers in MimeTIC have developed this skill in biomechanics and perception-action coupling. Associating these researchers with experts in virtual human simulation, computational geometry and constraint solving enables us to contribute to enhancing fundamental knowledge of human motion control.

3.2 Experiments in Virtual Reality

Understanding interactions between humans is challenging because it involves many complex phenomena, including perception, decision-making, cognition and social behaviors. Moreover, all these phenomena are difficult to isolate in real situations, and it is therefore highly complex to understand their individual influence on human interactions. It is then necessary to find an alternative solution that can standardize the experiments and allow the modification of only one parameter at a time. Video was first used, since a displayed experiment is perfectly repeatable, and cut-offs (stopping the video at a specific time before its end) provide temporal information. Nevertheless, the absence of an adapted viewpoint and of stereoscopic vision removes depth information, which is very meaningful. Moreover, during a video recording session, the real human acts in front of a camera, not an opponent; the interaction is then not a real interaction between humans.

Virtual Reality (VR) systems allow full standardization of the experimental situations and complete control of the virtual environment. It is then possible to modify only one parameter at a time and to observe its influence on the perception of the immersed subject. VR can thus be used to understand what information is picked up to make a decision. Moreover, cut-offs can also be used to obtain temporal information about when information is picked up. When the subject can moreover react as in a real situation, his movement (captured in real time) provides information about his reactions to the modified parameter. Not only perception is then studied, but the complete perception-action loop. Perception and action are indeed coupled and influence each other, as suggested by Gibson in 1979.

Finally, VR allows the validation of virtual human models. Some models are indeed based on the interaction between the virtual character and other humans, such as a walking model. In that case, there are two ways to validate them. First, they can be compared to real data (e.g., real trajectories of pedestrians). But such data are not always available and are difficult to obtain. The alternative solution is then to use VR. The realism of the model is validated by immersing a real subject in a virtual environment in which a virtual character is controlled by the model. The evaluation is then deduced from how the immersed subject reacts when interacting with the model and from how realistic the virtual character feels to them.

3.3 Computer Animation

Computer animation is the branch of computer science devoted to models for the representation and simulation of the dynamic evolution of virtual environments. A first focus is the animation of virtual characters (behavior and motion). Through a deeper understanding of interactions using VR, and through better perceptual, biomechanical and motion control models to simulate the evolution of dynamic systems, the MimeTIC team has the ability to build more realistic, efficient and believable animations. Perceptual studies also enable us to focus computation time on relevant information (i.e., information that ensures motion is perceived as natural) and save time on unperceived details. The underlying challenges are (i) the computational efficiency of the system, which needs to run in real time in many situations, (ii) the capacity of the system to generalize/adapt to new situations for which data was not available or models were not defined, and (iii) the variability of the models, i.e., their ability to handle many body morphologies and generate variations in motions that would be specific to each virtual character.

In many cases, however, these challenges cannot be addressed in isolation. Typically, character behaviors also depend on the nature and topology of the surrounding environment. In essence, a character animation system should also rely on smarter representations of the environment, in order to better perceive it and take contextualized decisions. Hence, the animation of virtual characters in our context often needs to be coupled with models to represent the environment, reason, and plan both at a geometric level (can the character reach this location?) and at a semantic level (should it use the sidewalk, the stairs, or the road?). This represents the second focus. Underlying challenges are the ability to offer a compact yet precise representation on which efficient path and motion planning can be performed, and on which high-level reasoning can be achieved.

Finally, a third scientific focus tied to the computer animation axis is digital storytelling. Evolved representations of motions and environments enable realistic animations. Yet it is equally important to question how these events should be portrayed, when, and from which angle. In essence, this means integrating discourse models into story models, the story representing the sequence of events which occur in a virtual environment, and the discourse representing how this story should be displayed (i.e., which events to show, in which order and from which viewpoint). Underlying challenges pertain to (i) narrative discourse representations, (ii) projections of the discourse into the geometry, planning camera trajectories and planning cuts between viewpoints, and (iii) means to interactively control the unfolding of the discourse.

By thereby establishing the foundations to build bridges between high-level narrative structures, the semantic/geometric planning of motions and events, and low-level character animation, the MimeTIC team adopts a principled and all-inclusive approach to the animation of virtual characters.

4 Application domains

4.1 Animation, Autonomous Characters and Digital Storytelling

Computer Animation is one of the main application domains of the research work conducted in the MimeTIC team, in particular in relation to the entertainment and game industries. In these domains, creating virtual characters that are able to replicate real human motions and behaviours still raises key unanswered challenges, especially as virtual characters are more and more frequently required to populate virtual worlds. For instance, virtual characters are used to replace secondary actors and generate highly populated scenes that would be hard and costly to produce with real actors, which requires creating high-quality replicas that appear, move and behave both individually and collectively like real humans. The three key challenges for the MimeTIC team are therefore (i) to create natural animations (i.e., virtual characters that move like real humans), (ii) to create autonomous characters (i.e., that behave like real humans) and (iii) to orchestrate the virtual characters so as to create interactive stories.

First, our challenge is therefore to create animations of virtual characters that are natural, in the broadest sense of the term: moving as a real human would. This challenge covers several aspects of Character Animation depending on the context of application, e.g., producing visually plausible or physically correct motions, producing natural motion sequences, etc. Our goal is therefore to develop novel methods for animating virtual characters, e.g., based on motion capture, data-driven approaches, or learning approaches. However, because of the complexity of human motion (e.g., the number of degrees of freedom that can be controlled), resulting animations are not necessarily physically, biomechanically, or visually plausible. For instance, current physics-based approaches produce physically correct motions but not necessarily perceptually plausible ones. This is why most entertainment industries still mainly rely on manual animation, e.g., in games and movies. Therefore, research in MimeTIC on character animation is also conducted with the goal of validating objective (e.g., physical, biomechanical) as well as subjective (e.g., visual plausibility) criteria.

Second, one of the main challenges in terms of autonomous characters is to provide a unified architecture for modeling their behavior. This architecture includes perception, action and decision parts. The decision part needs to mix different kinds of models, acting at different time scales and working with data of different natures, ranging from numerical (motion control, reactive behaviors) to symbolic (goal-oriented behaviors, reasoning about actions and changes). For instance, autonomous characters play the role of actors that are driven by a scenario in video games and virtual storytelling. Their autonomy allows them to react to unpredictable user interactions and adapt their behavior accordingly. In the field of simulation, autonomous characters are used to simulate the behavior of humans in different kinds of situations. They make it possible to study new situations and their possible outcomes. In the MimeTIC team, our focus is therefore not to reproduce human intelligence but to propose an architecture making it possible to model credible behaviors of anthropomorphic virtual actors evolving/moving in real time in virtual worlds. The latter can represent particular situations studied by behavioral psychologists, or correspond to an imaginary universe described by a scenario writer. The proposed architecture should mimic the relevant human intellectual and physical functions.

Finally, interactive digital storytelling, including novel forms of edutainment and serious games, provides access to social and human themes through stories which can take various forms, and holds opportunities for massively enhancing the possibilities of interactive entertainment, computer games and digital applications. It provides chances for redefining the experience of narrative through interactive simulations of computer-generated story worlds, and opens many challenging questions at the overlap between computational narratives, autonomous behaviours, interactive control, content generation and authoring tools. Of particular interest for the MimeTIC research team, virtual storytelling raises challenging opportunities in providing effective models for enforcing autonomous behaviours for characters in complex 3D environments. Offering characters both low-level capacities, such as perceiving the environment, interacting with it and reacting to changes in its topology, and higher levels built on top of these, such as abstract representations for efficient reasoning, path and activity planning, and models of cognitive states and behaviours, requires expressive, multi-level and efficient computational models. Furthermore, virtual storytelling requires the seamless control of the balance between the autonomy of characters and the unfolding of the story through the narrative discourse. Virtual storytelling also raises challenging questions on the conveyance of a narrative through interactive or automated control of the cinematography (how to stage the characters, the lights and the cameras). For example, estimating the visibility of key subjects, or performing motion planning for cameras and lights, are central issues which have not yet received satisfactory answers in the literature.

4.2 Fidelity of Virtual Reality

VR is a powerful tool for perception-action experiments. VR-based experimental platforms allow exposing a population to fully controlled stimuli that can be repeated from trial to trial with high accuracy. Factors can be isolated and object manipulations (position, size, orientation, appearance, etc.) are easy to perform. Stimuli can be interactive and adapted to participants' responses. Such features allow researchers to use VR to perform experiments in sports, motion control, perceptual control laws, spatial cognition, as well as person-person interactions. However, the interaction loop between users and their environment differs in virtual conditions compared with real conditions. When a user interacts with an environment, action and perception are closely related. While moving, the perceptual system (vision, proprioception, etc.) provides feedback about the user's own motion and information about the surrounding environment. This allows the user to adapt his/her trajectory to sudden changes in the environment and to generate safe and efficient motion. In virtual conditions, the interaction loop is more complex because it involves several material aspects.

First, the virtual environment is perceived through a digital display, which could affect the available information and thus potentially introduce a bias. For example, studies have observed a distance compression effect in VR, partially explained by the use of head-mounted displays with a reduced field of view that exert weight and torque on the user's head. Similarly, the velocity perceived in a VR environment differs from real-world velocity, introducing an additional bias. Other factors, such as image contrast, delays in the displayed motion and the point of view, can also influence performance in VR. The second point concerns the user's motion in the virtual world. The user can actually move if the room is big enough or if wearing a head-mounted display. Even with real motion, studies have shown that walking speed is decreased, personal space size is modified and navigation in VR is performed with increased gait instability. Although natural locomotion is certainly the most ecological approach, the limited physical size of VR setups usually prevents its use. Locomotion interfaces are therefore required. Locomotion interfaces are made up of two components, a locomotion metaphor (device) and a transfer function (software), which can also introduce bias in the generated motion. Indeed, the actuating movement of the locomotion metaphor can significantly differ from real walking, and the simulated motion depends on the transfer function applied. Locomotion interfaces usually cannot preserve all the sensory channels involved in locomotion.
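
In its simplest form, a transfer function maps the measured device motion to a virtual velocity. The sketch below is a generic illustration; the dead zone, gain and clamp values are invented, and the gain above 1 alludes to the commonly reported underestimation of speed in VR.

```python
def transfer_function(device_speed, gain=1.2, dead_zone=0.05, v_max=1.5):
    """Map a locomotion-device speed (m/s) to a virtual walking speed.
    All parameter values are illustrative, not calibrated ones."""
    if abs(device_speed) < dead_zone:
        return 0.0                      # ignore sensor noise / idle sway
    v = gain * device_speed             # gain > 1: compensate perceived-speed bias
    return max(-v_max, min(v_max, v))   # clamp to a plausible walking speed

print(transfer_function(0.02), transfer_function(1.0), transfer_function(2.0))
# → 0.0 1.2 1.5
```

Each of these three stages (dead zone, gain, clamp) is itself a potential source of the bias discussed above, since none of them exists in real walking.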

When studying human behavior in VR, the aforementioned factors in the interaction loop potentially introduce bias both in perception and in the generation of motor behavior. MimeTIC is working on the mandatory step of validating VR setups to make them usable for capturing and analyzing human motion.

4.3 Motion Sensing of Human Activity

Recording human activity is a key point of many applications and fundamental studies. Numerous sensors and systems have been proposed to measure positions, angles or accelerations of the user's body parts. Whatever the system, one of the main problems is to automatically recognize and analyze the user's performance from poor and noisy signals. Human activity and motion are subject to variability: intra-variability due to space and time variations of a given motion, but also inter-variability due to different styles and anthropometric dimensions. MimeTIC has addressed the above problems in two main directions.

Firstly, we have studied how to recognize and quantify motions performed by a user when using accurate systems such as Vicon (a product of Oxford Metrics), Qualisys, or Optitrack (a product of Natural Point) motion capture systems. These systems provide large vectors of accurate information. Due to the size of the state vector (all the degrees of freedom), the challenge is to find the compact information (named features) that enables the automatic system to recognize the performance of the user. Whatever the method used, finding relevant features that are not sensitive to intra-individual and inter-individual variability is a challenge. Some researchers have proposed to manually design these features (such as a Boolean value stating whether the arm is moving forward or backward), so that the success ratio is directly linked to the expertise of the designer. Many generic features have been proposed, such as the Laban notation, which was introduced to encode dance motions. Other approaches use machine learning to automatically extract these features. However, most of the proposed approaches search a database for motions whose properties correspond to the features of the user's performance (named motion retrieval approaches). This does not ensure retrieval of the exact performance of the user but of a set of motions with similar properties.
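As an illustration of such a hand-crafted Boolean feature, here is a minimal sketch (hypothetical, not MimeTIC code; the marker name and axis convention are assumptions):

```python
import numpy as np

def arm_forward_feature(wrist_z):
    """Boolean feature per frame transition: True when the wrist moves
    forward (increasing z) between consecutive frames.

    wrist_z: (n_frames,) forward coordinate of the wrist marker.
    """
    return np.diff(wrist_z) > 0.0

# A wrist moving steadily forward yields an all-True feature vector
print(arm_forward_feature(np.array([0.0, 0.1, 0.25, 0.4])))
```

Such binary features are deliberately insensitive to amplitude and anthropometry, which is exactly the robustness to intra- and inter-individual variability discussed above.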

Secondly, we wish to find alternatives to the above approach, which relies on accurate and complete knowledge of joint angles and positions. New sensors, such as depth cameras (Kinect, a product of Microsoft), provide very noisy joint information but also the surface of the user. Classical approaches would try to fit a skeleton to the surface in order to compute joint angles, which again leads to large state vectors. An alternative is to extract relevant information directly from the raw data, such as the surface provided by depth cameras. The key problem is that the nature of these data may be very different from classical representations of human performance. In MimeTIC, we address this problem in specific application domains that require extracting specific information, such as gait asymmetry or regularity for clinical analysis of human walking.

4.4 Sports

Sport is characterized by complex displacements and motions. One main objective is to understand the determinants of performance through the analysis of the motion itself. In the team, different sports have been studied, such as the tennis serve, where the goal was to understand the contribution of each body segment to performance as well as the risk of injury, together with other situations in cycling, swimming, fencing and soccer. Sports motions depend on the visual information that the athlete can pick up in the environment, including the opponent's actions. Perception is thus fundamental to performance. Indeed, a sports action, being unique, complex and often time-constrained, requires a selective gathering of information. Perception is often seen as a prerequisite for action, taking the role of a passive collector of information. However, as Gibson argued in 1979, the perception-action relationship should not be considered sequentially but rather as a coupling: we perceive to act, but we must act to perceive. There would thus be laws of coupling between the informational variables available in the environment and the motor responses of a subject. In other words, athletes have the ability to perceive opportunities for action directly from the environment. Whichever school of thought is considered, VR offers new perspectives to address these concepts by using real-time motion capture of the immersed athlete in a complementary way.

In addition to enabling a better understanding of sports and of interactions between athletes, VR can also be used as a training environment, since it can provide complementary tools to coaches. It is indeed possible to add visual or auditory information to better train an athlete. The knowledge gained in perceptual experiments can, for example, be used to highlight the body parts that are important to look at in order to correctly anticipate the opponent's action.

4.5 Ergonomics

The design of workstations nowadays tends to include assessment steps in a Virtual Environment (VE) to evaluate ergonomic features. This approach is more cost-effective and convenient since working directly on the Digital Mock-Up (DMU) in a VE is preferable to constructing a real physical mock-up in a Real Environment (RE). This is substantiated by the fact that a Virtual Reality (VR) set-up can be easily modified, enabling quick adjustments of the workstation design. Indeed, the aim of integrating ergonomics evaluation tools in VEs is to facilitate the design process, enhance the design efficiency, and reduce the costs.

The development of such platforms calls for several improvements in the fields of motion analysis and VR. First, interactions have to be as natural as possible to properly mimic the motions performed in real environments. Second, the fidelity of the simulator also needs to be correctly evaluated. Finally, motion analysis tools have to provide, in real time, biomechanical quantities usable by ergonomists to analyse and improve working conditions.

In real working conditions, motion analysis and musculoskeletal risk assessment also raise many scientific and technological challenges. As in virtual reality, the fidelity of the working process may be affected by the measurement method: wearing sensors or skin markers, together with the need to frequently calibrate the assessment system, may change the way workers perform their tasks. Whatever the measurement, classical ergonomic assessments generally address one specific parameter, such as posture, force, or repetitions, which makes it difficult to design a musculoskeletal risk indicator that actually represents the risk. Another key scientific challenge is then to design new indicators that better capture the risk of musculoskeletal disorders. Such an indicator has to deal with the trade-off between accurate biomechanical assessment and the difficulty of obtaining reliable information in real working conditions.

4.6 Locomotion and Interactions between walkers

Modeling and simulating locomotion and interactions between walkers is a very active, complex and competitive domain involving various disciplines such as mathematics, cognitive science, physics, computer graphics and rehabilitation. Locomotion and interactions between walkers are by definition at the very core of our society, since they represent the basic synergies of our daily life. When walking in the street, we must produce a locomotor movement while gathering information about our surrounding environment, in order to interact with people and move without collision, alone or in a group, and to intercept, meet or avoid somebody. MimeTIC is an international key contributor in the domain of understanding and simulating locomotion and interactions between walkers. By combining approaches from human movement sciences and computer science, the team focuses on the locomotor invariants which characterize the generation of locomotor trajectories, and conducts challenging experiments on the visuo-motor coordination involved in interactions between walkers, using both real and virtual set-ups. One main challenge is to model not only the "average" behaviour of the healthy young adult but also to extend to specific populations, considering the effects of pathology or age (children, older adults). As a first example, when patients cannot walk efficiently, in particular those suffering from central nervous system disorders, it becomes very useful for practitioners to benefit from an objective evaluation of their capacities. To facilitate such evaluations, we have developed two complementary indices, one based on kinematics and the other on muscle activations. One major point of our research is that such indices are usually developed only for children, whereas adults with these disorders are much more numerous.
We extend this objective evaluation by using a person-person interaction paradigm, which allows studying visuo-motor strategy deficits in these specific populations.

Another fundamental question is the adaptation of the walking pattern to anatomical constraints, such as pathologies in orthopedics, or to the various human and non-human primates studied in paleoanthropology. The question is then to predict plausible locomotion for a given morphology. This raises fundamental questions about the variables that are regulated to control gait: balance control, minimum energy, minimum jerk... In MimeTIC we develop models and simulators to efficiently test hypotheses on gait control for given morphologies.

5 Highlights of the year

MimeTIC is part of the PIA3 Equipex+ CONTINUUM project, led by CNRS, which aims to develop and maintain an outstanding national network of VR platforms. This network will support academic research on the continuum between the virtual and real worlds, and its applications.

PIA3 programs to support sport were launched in 2019, with two calls. In the second call, in 2020, MimeTIC was granted two such PPR projects (out of the 14 submitted):

  • BEST Tennis, led by Benoit Bideau, aiming at enhancing the performance of elite swimmers, with a strong investment of the French Swimming Federation,
  • REVEA, led by Richard Kulpa, aiming at developing new training sessions based on virtual reality for boxing, athletics and gymnastics.

MimeTIC is also leading Digisport, one of the 24 PIA3 EUR projects (out of 81 submitted). This project aims at creating a graduate school to support multidisciplinary research on sports, bringing together the main Rennes actors in sports science, computer science, electronics, data science, and the human and social sciences.

Anne-Hélène Olivier defended her Habilitation to Direct Research (HDR) on December 7th, 2020.

5.1 Awards

Rebecca Fribourg (co-supervised with the Hybrid team) received two awards for her contributions on the topic of Avatars and Virtual Embodiment:

  • IEEE Virtual Reality Best Journal Papers Award, for her work Avatar and Sense of Embodiment: Studying the Relative Preference Between Appearance, Control and Point of View.
  • ICAT-EGVE Best Paper Award, for her work Influence of Threat Occurrence and Repeatability on the Sense of Embodiment and Threat Response in VR.

Hugo Brument (co-supervised with the Rainbow and Hybrid teams) received the best paper award at the EuroVR international conference for his work Influence of Dynamic Field of View Restrictions on Rotation Gain Perception in Virtual Environments 32.

6 New software and platforms

6.1 New software

6.1.1 AsymGait

  • Name: Asymmetry index for clinical gait analysis based on depth images
  • Keywords: Motion analysis, Kinect, Clinical analysis
  • Scientific Description: The system uses depth images delivered by the Microsoft Kinect to first retrieve the gait cycles. To this end, it analyzes the knee trajectories instead of the feet to obtain more robust gait event detection. Based on these cycles, the system computes a mean gait cycle model to decrease the effect of sensor noise. Asymmetry is then computed at each frame of the gait cycle as the spatial difference between the left and right parts of the body.
  • Functional Description: AsymGait is a software package that works with Microsoft Kinect data, especially depth images, to carry out clinical gait analysis. First it identifies the main gait events (footstrike, toe-off) using the depth information to isolate gait cycles. Then it computes a continuous asymmetry index within the gait cycle, where asymmetry is viewed as a spatial difference between the two sides of the body.
  • Authors: Edouard Auvinet, Franck Multon
  • Contact: Franck Multon
  • Participants: Edouard Auvinet, Franck Multon
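The per-frame asymmetry computation described above can be sketched as follows (an illustrative toy version, not the actual AsymGait implementation; the mirroring axis and joint pairing are assumptions):

```python
import numpy as np

def asymmetry_index(left, right):
    """Per-frame asymmetry over a normalized gait cycle.

    left, right: (n_frames, n_joints, 3) positions of paired
    left/right body parts. The right side is mirrored about the
    sagittal plane (x = 0), and the asymmetry at each frame is the
    mean distance between matched joints.
    """
    mirrored = right.copy()
    mirrored[..., 0] *= -1.0                      # mirror x coordinate
    return np.linalg.norm(left - mirrored, axis=-1).mean(axis=-1)

# A perfectly symmetric gait yields a zero index at every frame
cycle = np.random.rand(100, 2, 3)
mirror = cycle.copy()
mirror[..., 0] *= -1.0
print(asymmetry_index(cycle, mirror).max())  # 0.0
```

Averaging this index over a mean gait cycle model, as AsymGait does, further attenuates sensor noise.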

6.1.2 Cinematic Viewpoint Generator

  • Keyword: 3D animation
  • Functional Description: The software, developed as an API, provides a means to automatically compute a collection of viewpoints over one or two specified geometric entities, in a given 3D scene, at a given time. These viewpoints satisfy classical cinematographic framing conventions and guidelines, including different shot scales (from extreme long shot to extreme close-up), different shot angles (internal, external, parallel, apex), and different screen compositions (thirds, fifths, symmetric or dissymmetric). The viewpoints cover the range of possible framings for the specified entities. Their computation relies on a database of framings that are dynamically adapted to the 3D scene by using a manifold parametric representation, and guarantees the visibility of the specified entities. The set of viewpoints is also automatically annotated with cinematographic tags such as shot scale, angle, composition, relative placement of entities, and line of interest.
  • Authors: Christophe Lino, Emmanuel Badier, Marc Christie
  • Contact: Marc Christie
  • Participants: Christophe Lino, Emmanuel Badier, Marc Christie
  • Partners: Université d'Udine, Université de Nantes

6.1.3 CusToM

  • Name: Customizable Toolbox for Musculoskeletal simulation
  • Keywords: Biomechanics, Dynamic Analysis, Kinematics, Simulation, Mechanical multi-body systems
  • Scientific Description:

    The present toolbox performs motion analysis through an inverse dynamics method.

    Before the motion analysis steps, a musculoskeletal model is generated. This consists, first, of generating the desired anthropometric model from model libraries. The generated model is then kinematically calibrated using motion capture data. The inverse kinematics step, the inverse dynamics step and the muscle forces estimation step are then successively performed from motion capture and external force data. Two folders and one script are available at the toolbox root. The Main script collects all the functions of the motion analysis pipeline. The Functions folder contains all functions used in the toolbox; this folder and all its subfolders must be added to the Matlab path. The Problems folder contains the different studies; the user has to create one subfolder for each new study, and a new study is necessary whenever a new musculoskeletal model is used. Different files are automatically generated and saved in this folder. All files located at its root are related to the model and are valid whatever the motion considered. A new folder is added for each new motion capture, and all files located in such a folder relate only to that motion.

  • Functional Description: Inverse kinematics, inverse dynamics, muscle forces estimation, external forces prediction.
  • Publications: hal-02268958, hal-02088913, hal-02109407, hal-01904443, hal-02142288, hal-01988715, hal-01710990
  • Contacts: Antoine Muller, Charles Pontonnier, Georges Dumont
  • Participants: Antoine Muller, Charles Pontonnier, Georges Dumont, Pierre Puchaud, Anthony Sorel, Claire Livet, Louise Demestre
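CusToM itself is a Matlab toolbox; as a language-neutral illustration of the inverse dynamics step at the core of such a pipeline, here is a minimal one-degree-of-freedom example (a pendulum; for illustration only, not CusToM code):

```python
import numpy as np

def inverse_dynamics_pendulum(q, qdd, m=1.0, l=0.5, g=9.81):
    """Joint torque for a point mass m at distance l from the pivot.

    q: joint angle from the downward vertical (rad); qdd: angular
    acceleration (rad/s^2). Equation of motion of the pendulum:
        tau = m*l**2 * qdd + m*g*l*sin(q)
    Inverse dynamics reads measured (q, qdd) and returns tau.
    """
    return m * l**2 * qdd + m * g * l * np.sin(q)

# Static hold at 90 degrees: the torque balances gravity only
print(inverse_dynamics_pendulum(q=np.pi / 2, qdd=0.0))  # 4.905
```

The full pipeline applies the same principle to a calibrated multi-body model, with motion capture providing (q, qdd) for every joint and measured external forces entering the equations of motion.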

6.1.4 Directors Lens Motion Builder

  • Keywords: Previsualization, Virtual camera, 3D animation
  • Functional Description: Directors Lens Motion Builder is a software plugin for Autodesk's Motion Builder animation tool. This plugin features a novel workflow to rapidly prototype cinematographic sequences in a 3D scene, and is dedicated to the 3D animation and movie previsualization industries. The workflow integrates the automated computation of viewpoints (using the Cinematic Viewpoint Generator) to interactively explore different framings of the scene, proposes means to interactively control framings in the image space, and proposes a technique to automatically retarget a camera trajectory from one scene to another while enforcing visual properties. The tool also proposes to edit the cinematographic sequence and export the animation. The software can be linked to different virtual camera systems available on the market.
  • Authors: Emmanuel Badier, Christophe Lino, Marc Christie
  • Contact: Marc Christie
  • Participants: Christophe Lino, Emmanuel Badier, Marc Christie
  • Partner: Université de Rennes 1

6.1.5 Kimea

  • Name: Kinect IMprovement for Ergonomics Assessment
  • Keywords: Biomechanics, Motion analysis, Kinect
  • Scientific Description: Kimea consists in correcting the skeleton data delivered by a Microsoft Kinect for ergonomics purposes. Kimea is able to manage most of the occlusions that can occur in real working situations at workstations. To this end, Kimea relies on a database of example poses organized as a graph, in order to replace unreliable body segment reconstructions with poses that have already been measured on real subjects. The potential pose candidates are used in an optimization framework.
  • Functional Description: Kimea takes Kinect skeleton data as input and corrects most measurement errors to carry out ergonomic assessment at the workstation.
  • Publications: hal-01612939v1, hal-01393066v1, hal-01332716v1, hal-01332711v2, hal-01095084v1
  • Authors: Pierre Plantard, Franck Multon, Anne-Sophie Le Pierres, Hubert Shum
  • Contact: Franck Multon
  • Participants: Franck Multon, Hubert Shum, Pierre Plantard
  • Partner: Faurecia
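The example-based correction can be sketched as a masked nearest-neighbour lookup (a toy illustration under assumed data shapes, not the Kimea code, which organizes poses in a graph and uses an optimization framework):

```python
import numpy as np

def correct_pose(noisy_pose, reliable, database):
    """Replace an unreliable Kinect pose with the database pose that
    best matches its reliable joints.

    noisy_pose: (n_joints, 3) measured joint positions.
    reliable:   (n_joints,) 1.0 where the joint is trusted, else 0.0.
    database:   (n_poses, n_joints, 3) previously measured poses.
    """
    dists = np.linalg.norm(database - noisy_pose, axis=-1)  # (n_poses, n_joints)
    scores = (dists * reliable).sum(axis=-1)   # only trusted joints count
    return database[np.argmin(scores)]

# Toy example: 2 candidate poses, 3 joints; joint 2 is occluded
db = np.stack([np.zeros((3, 3)), np.ones((3, 3))])
noisy = np.array([[0.1, 0.0, 0.0], [0.0, 0.1, 0.0], [9.0, 9.0, 9.0]])
mask = np.array([1.0, 1.0, 0.0])
print(correct_pose(noisy, mask, db))  # selects the all-zeros pose
```

Because the occluded joint is masked out, the grossly wrong measurement on joint 2 does not prevent retrieval of the plausible pose.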

6.1.6 Populate

  • Keywords: Behavior modeling, Agent, Scheduling
  • Scientific Description:

    The software provides the following functionalities:

    - A high-level XML dialect dedicated to the description of agent activities in terms of tasks and sub-activities that can be combined with different kinds of operators: sequential, unordered, interlaced. This dialect also enables the description of time and location constraints associated with tasks.

    - An XML dialect that enables the description of an agent's personal characteristics.

    - An informed graph that describes the topology of the environment as well as the locations where tasks can be performed. A bridge between TopoPlan and Populate has also been designed: it automatically analyzes an informed 3D environment to generate an informed graph compatible with Populate.

    - The generation of a valid task schedule based on the previously mentioned descriptions.

    With a good configuration of agent characteristics (based on statistics), we demonstrated that the task schedules produced by Populate are representative of human ones. In conjunction with TopoPlan, it has been used to populate a district of Paris as well as imaginary cities with several thousand pedestrians navigating in real time.

  • Functional Description: Populate is a toolkit dedicated to task scheduling under time and space constraints in the field of behavioral animation. It is currently used to populate virtual cities with pedestrians performing different kinds of activities involving travel between locations. However, the generic nature of the algorithm and underlying representations enables its use in a wide range of applications that need to link activity, time and space. The main scheduling algorithm relies on the following inputs: an informed environment description, the activity an agent needs to perform, and the individual characteristics of this agent. The algorithm produces a valid task schedule compatible with the time and spatial constraints imposed by the activity description and the environment. In this task schedule, time intervals relating to travel and task fulfillment are identified, and the locations where tasks should be performed are automatically selected.
  • Contacts: Carl-Johan Jorgensen, Fabrice Lamarche
  • Participants: Carl-Johan Jorgensen, Fabrice Lamarche
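The scheduling principle can be illustrated with a much-simplified greedy sketch (illustrative only; Populate's actual algorithm handles richer activity operators and informed-graph location queries):

```python
def schedule(tasks, travel_time, start_location, t0=0.0):
    """Earliest-deadline-first schedule under travel-time constraints.

    tasks: list of dicts {"name", "location", "duration", "deadline"}.
    travel_time(a, b): travel duration between two locations.
    Returns a list of (name, start, end) tuples, or None if some
    deadline cannot be met.
    """
    plan, t, loc = [], t0, start_location
    for task in sorted(tasks, key=lambda k: k["deadline"]):
        t += travel_time(loc, task["location"])     # travel first
        end = t + task["duration"]
        if end > task["deadline"]:
            return None                             # infeasible
        plan.append((task["name"], t, end))
        t, loc = end, task["location"]
    return plan

# Toy example: uniform 5-minute travel between any two locations
tasks = [
    {"name": "buy bread", "location": "bakery", "duration": 10, "deadline": 30},
    {"name": "work", "location": "office", "duration": 60, "deadline": 120},
]
print(schedule(tasks, lambda a, b: 5, "home"))
```

The output interleaves travel and task intervals exactly as in the description above: each task gets a start time that accounts for the travel leg preceding it.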

6.1.7 PyNimation

  • Keywords: Moving bodies, 3D animation, Synthetic human
  • Scientific Description: PyNimation is a Python-based open-source (AGPL) software for editing motion capture data. It was initiated because of the lack of open-source software able to process different types of motion capture data in a unified way, which typically forces animation pipelines to rely on several commercial tools: motions are captured with one software, retargeted with another, then edited with a third, etc. The goal of PyNimation is therefore to bridge the gap in the animation pipeline between motion capture software and final game engines, by handling different types of motion capture data in a unified way, providing standard and novel motion editing solutions, and exporting motion capture data compatible with common 3D game engines (e.g., Unity, Unreal). Its goal is also to support our research efforts in this area; it is therefore used, maintained, and extended to progressively include novel motion editing features as well as the results of our research projects. In the short term, our goal is to further extend its capabilities and to share it more widely with the animation and research communities.
  • Functional Description:

    PyNimation is a framework for editing, visualizing and studying skeletal 3D animations; it was more particularly designed to process motion capture data. It stems from the wish to leverage Python's data science capabilities and ease of use for human motion research.

    In its version 1.0, PyNimation offers the following functionalities, which will evolve with the development of the tool:

    - Import/export of the FBX, BVH, and MVNX animation file formats

    - Access and modification of skeletal joint transformations, together with a number of functionalities to manipulate these transformations

    - Basic features for human motion animation (under development, including e.g. different inverse kinematics methods and editing filters)

    - Interactive OpenGL visualisation of animations and objects, including the possibility to animate skinned meshes

  • Authors: Ludovic Hoyet, Robin Adili, Benjamin Niay, Alberto Jovane
  • Contacts: Ludovic Hoyet, Robin Adili
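To illustrate the kind of skeletal joint-transform manipulation such a framework exposes, here is a generic forward-kinematics sketch in plain NumPy (a conceptual illustration only; it does not reproduce PyNimation's actual API):

```python
import numpy as np

def forward_kinematics(offsets, local_rotations):
    """World positions of a simple joint chain (BVH-style hierarchy).

    offsets:         (n, 3) bone offsets expressed in the parent frame.
    local_rotations: list of n (3, 3) local rotation matrices.
    Rotations are accumulated down the chain and each offset is
    rotated into the world frame before being added.
    """
    R = np.eye(3)
    pos = np.zeros(3)
    out = []
    for off, rot in zip(offsets, local_rotations):
        R = R @ rot
        pos = pos + R @ off
        out.append(pos.copy())
    return np.array(out)

# Two unit bones along x with identity rotations: joints at x=1 and x=2
offsets = np.array([[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(forward_kinematics(offsets, [np.eye(3), np.eye(3)]))
```

Editing a motion then amounts to modifying the per-frame local rotations and recomputing world positions, which is the operation animation file formats such as BVH are built around.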

6.1.8 The Theater

  • Keywords: 3D animation, Interactive Scenarios
  • Functional Description: The Theater is a software framework for developing interactive scenarios in virtual 3D environments. The framework provides means to author and orchestrate 3D character behaviors and simulate them in real time. The tools provide a basis to build a range of 3D applications, from simple simulations with reactive behaviors to complex storytelling applications including narrative mechanisms such as flashbacks.
  • Contact: Marc Christie
  • Participant: Marc Christie

6.2 New platforms

6.2.1 Immerstar Platform

Participants: Georges Dumont, Ronan Gaugne, Anthony Sorel, Richard Kulpa.

With its two virtual reality platforms, Immersia and Immermove, grouped under the name Immerstar, the team has access to high-level scientific facilities. This equipment benefits the research teams of the center and has allowed them to extend their local, national and international collaborations. The Immerstar platform was granted Inria CPER funding for 2015-2019, which enabled important evolutions of the equipment. The first technical evolutions were decided in 2016 and implemented in 2017. For Immermove, a third face was added to the immersive space and the Vicon tracking system was extended, an effort continued this year with 23 new cameras. For Immersia, WQXGA laser projectors with increased overall resolution, a new higher-frequency tracking system and new computers for simulation and image generation were installed in 2017. In 2018, a Scale One haptic device was installed. As planned in the CPER proposal, it allows one- or two-handed haptic feedback in the full space covered by Immersia, and can carry the user. We celebrated the twentieth anniversary of the Immersia platform in November 2019 by inaugurating the new haptic equipment, with scientific presentations attended by 150 participants and visits for support services attended by 50 people. Building on this support, in 2020 we participated in a PIA3 Equipex+ proposal. This proposal, CONTINUUM, involves 22 partners; it has been successfully evaluated and will be funded. The CONTINUUM project will create a collaborative research infrastructure of 30 platforms located throughout France, to advance interdisciplinary research based on the interaction between computer science and the human and social sciences.
Thanks to CONTINUUM, 37 research teams will develop cutting-edge research programs focusing on visualization, immersion, interaction and collaboration, as well as on human perception, cognition and behaviour in virtual/augmented reality, with potential impact on societal issues. CONTINUUM enables a paradigm shift in the way we perceive, interact, and collaborate with complex digital data and digital worlds by putting humans at the center of the data processing workflows. The project will empower scientists, engineers and industry users with a highly interconnected network of high-performance visualization and immersive platforms to observe, manipulate, understand and share digital data, real-time multi-scale simulations, and virtual or augmented experiences. All platforms will feature facilities for remote collaboration with other platforms, as well as mobile equipment that can be lent to users to facilitate onboarding.

7 New results

7.1 Outline

In 2020, MimeTIC maintained its activity in motion analysis, modelling and simulation, supporting the idea that these approaches are strongly coupled in a motion analysis-synthesis loop. This idea has been applied to the main application domains of MimeTIC:

  • Animation, Autonomous Characters and Digital Storytelling,
  • Fidelity of Virtual Reality,
  • Motion sensing of Human Activity,
  • Sports,
  • Ergonomics,
  • and Locomotion and Interactions Between Walkers.

7.2 Animation, Autonomous Characters and Digital Storytelling

MimeTIC's main research path consists in associating motion analysis and synthesis to enhance naturalness in computer animation, with applications in movie previsualisation and autonomous virtual character control. We have thus pushed example-based techniques in order to reach a good trade-off between simulation efficiency and naturalness of the results. In 2020, to achieve this goal, MimeTIC continued to explore the use of perceptual studies and model-based approaches, but also began to investigate deep learning.

7.2.1 Topology-aware Camera Control

Participants: Marc Christie, Amaury Louarn.

Figure 4: Our topology-aware camera control system works as follows: starting from a virtual environment with its navigation mesh in blue (a), a collection of camera tracks are generated by clustering points obtained via ray casts (green) generated from a topological skeleton representation of the navigation mesh (b). The camera is then controlled in real-time by a physical system that follows a target on the best camera track in order to film an actor navigating in the environment (c and d)

Placing and moving virtual cameras in real-time 3D environments remains a complex task due to the many requirements that need to be satisfied simultaneously. Beyond the essential features of ensuring visibility and frame composition for one or multiple targets, an ideal camera system should provide designers with tools to create variations in camera placement and motions, and to create shots which conform to aesthetic recommendations. In this work, we propose a controllable process that assists developers and artists in placing cinematographic cameras and camera paths throughout complex virtual environments, a task that was often performed manually until now. With no specification and no previous knowledge of the events, our tool exploits a topological analysis of the environment to capture the potential movements of the agents, highlight linearities and create an abstract skeletal representation of the environment. This representation is then exploited to automatically generate potentially relevant camera positions and trajectories organized in a graph representation with visibility information. At run-time, the system can then efficiently select appropriate cameras and trajectories according to artistic recommendations. We demonstrate the features of the proposed system with realistic game-like environments, highlighting the capacity to analyze a complex environment, generate relevant camera positions and camera tracks, and run efficiently with a range of different camera behaviours (see Figure 4) 38.

7.2.2 An interactive staging-and-shooting solver for virtual cinematography

Participants: Marc Christie, Amaury Louarn.

Research in virtual cinematography often narrows the problem down to computing the optimal viewpoint for the camera to properly convey a scene’s content. In contrast we propose to address simultaneously the questions of placing cameras, lights, objects and actors in a virtual environment through a high-level specification. We build on a staging language and propose to extend it by defining complex temporal relationships between these entities. We solve such specifications by designing pruning operators which iteratively reduce the range of possible degrees of freedom for entities while satisfying temporal constraints. Our solver first decomposes the problem by analyzing the graph of relationships between entities and then solves an ordered sequence of sub-problems. Users have the possibility to manipulate the current result for fine-tuning purposes or to creatively explore ranges of solutions while maintaining the relationships. As a result, the proposed system is the first staging-and-shooting cinematography system which enables the specification and solving of spatio-temporal cinematic layouts 39.

7.2.3 Deep Learning of Camera Behaviors

Participants: Marc Christie, Xi Wang.

Figure 5: We propose the design of a camera motion controller which has the ability to automatically extract camera behaviors from different film clips (on the left) and re-apply these behaviors to a 3D animation (center). In this example, three distinct camera trajectories are automatically generated (red, blue and yellow curves) from three different reference clips. Results display viewpoints at 4 specific instants along each camera trajectory demonstrating the capacity of our system to encode and reproduce camera behaviors from distinct input examples.

Designing a camera motion controller that has the capacity to move a virtual camera automatically in relation with contents of a 3D animation, in a cinematographic and principled way, is a complex and challenging task. Many cinematographic rules exist, yet practice shows there are significant stylistic variations in how these can be applied. In this paper, we propose an example-driven camera controller which can extract camera behaviors from an example film clip and re-apply the extracted behaviors to a 3D animation, through learning from a collection of camera motions. Our first technical contribution is the design of a low-dimensional cinematic feature space that captures the essence of a film's cinematic characteristics (camera angle and distance, screen composition and character configurations) and which is coupled with a neural network to automatically extract these cinematic characteristics from real film clips. Our second technical contribution is the design of a cascaded deep-learning architecture trained to (i) recognize a variety of camera motion behaviors from the extracted cinematic features, and (ii) predict the future motion of a virtual camera given a character 3D animation. We propose to rely on a Mixture of Experts (MoE) gating+prediction mechanism to ensure that distinct camera behaviors can be learned while ensuring generalization. We demonstrate the features of our approach through experiments that highlight (i) the quality of our cinematic feature extractor (ii) the capacity to learn a range of behaviors through the gating mechanism, and (iii) the ability to generate a variety of camera motions by applying different behaviors extracted from film clips. Such an example-driven approach offers a high level of controllability which opens new possibilities toward a deeper understanding of cinematographic style and enhanced possibilities in exploiting real film data in virtual environments (see Figure 5). 
The work is a collaboration with the Beijing Film Academy in China and was presented at SIGGRAPH 2020 22.
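The MoE gating+prediction mechanism mentioned above can be illustrated with a minimal sketch (not the authors' implementation; all dimensions, weights and names here are illustrative): a gating network produces a weight per expert, each expert makes its own prediction, and the output is the weighted blend, so distinct camera behaviors can specialize to different experts.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class MixtureOfExperts:
    """Toy MoE: a gating network weights the outputs of several experts."""
    def __init__(self, n_experts, dim_in, dim_out):
        # Random linear gate and experts, stand-ins for trained networks.
        self.gate_w = rng.normal(size=(dim_in, n_experts)) * 0.1
        self.expert_w = rng.normal(size=(n_experts, dim_in, dim_out)) * 0.1

    def forward(self, features):
        weights = softmax(features @ self.gate_w)                 # one weight per expert
        preds = np.einsum('i,eio->eo', features, self.expert_w)   # per-expert predictions
        return weights @ preds                                    # blended prediction

# Hypothetical usage: 8 cinematic features in, a 6-D camera pose update out.
moe = MixtureOfExperts(n_experts=3, dim_in=8, dim_out=6)
delta = moe.forward(np.ones(8))
print(delta.shape)  # (6,)
```

Because the gate output sums to one, each input is routed softly among experts, which is what allows several behaviors to coexist in one model.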

7.2.4 Efficient Visibility Computation for Camera Control

Participants: Marc Christie, Ludovic Burg.

Efficient visibility computation is a prominent requirement when designing automated camera control techniques for dynamic 3D environments; computer games, interactive storytelling and 3D media applications all need to track 3D entities while ensuring their visibility and delivering a smooth cinematographic experience. Addressing this problem requires sampling a very large set of potential camera positions and estimating visibility for each of them, which is intractable in practice. In this work, we introduce a novel technique to perform efficient visibility computation and anticipate occlusions. We first propose a GPU-rendering technique to sample visibility in Toric Space coordinates – a parametric space designed for camera control. We then rely on this visibility evaluation to compute an anticipation map which predicts the future visibility of a large set of cameras over a specified number of frames. We finally design a camera motion strategy that exploits this anticipation map to maximize the visibility of entities over time. The key features of our approach are demonstrated through comparison with classical ray-casting techniques on benchmark environments, and through an integration in multiple game-like 3D environments with heavy sparse and dense occluders 14.
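As an illustration of the anticipation-map idea, here is a toy 2D sketch under simplifying assumptions (a single spherical occluder with a known future trajectory, brute-force segment tests instead of the paper's GPU Toric-Space rendering): each candidate camera is scored by the fraction of future frames over which its line of sight to the target stays clear.

```python
import numpy as np

def anticipation_map(cam_positions, target_traj, occluder_traj, occ_radius):
    """Score each candidate camera by the fraction of future frames in which
    the camera->target segment stays clear of a moving spherical occluder."""
    n_frames = len(target_traj)
    scores = np.zeros(len(cam_positions))
    for i, c in enumerate(cam_positions):
        for t in range(n_frames):
            p, o = target_traj[t], occluder_traj[t]
            d = p - c
            length = np.linalg.norm(d)
            u = d / length
            # Closest point on the camera->target segment to the occluder center.
            s = np.clip(np.dot(o - c, u), 0.0, length)
            visible = np.linalg.norm(c + s * u - o) > occ_radius
            scores[i] += visible
    return scores / n_frames

# Hypothetical usage: two candidate cameras, one blocked by the occluder.
cams = np.array([[0.0, 0.0], [0.0, 5.0]])
scores = anticipation_map(cams,
                          target_traj=np.array([[10.0, 0.0]]),
                          occluder_traj=np.array([[5.0, 0.0]]),
                          occ_radius=1.0)
print(scores)  # camera behind the occluder scores 0, the offset one scores 1
```

A camera motion strategy can then steer toward positions with high anticipated visibility rather than reacting only after an occlusion occurs.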

7.2.5 Relative Pose Estimation and Planar Reconstruction via Superpixel-Driven Multiple Homographies

Participants: Marc Christie, Xi Wang.

This work proposes a novel method to simultaneously perform relative camera pose estimation and planar reconstruction of a scene from two RGB images. We start by extracting and matching superpixel information from both images and rely on a novel multi-model RANSAC approach to estimate multiple homographies from superpixels and identify matching planes. Ambiguity issues when performing homography decomposition are handled by a voting system that more reliably estimates relative camera pose and plane parameters. A non-linear optimization process is also proposed to perform bundle adjustment; it exploits a joint representation of homographies and works both for image pairs and for whole sequences of images (vSLAM). As a result, the approach provides a means to perform dense 3D plane reconstruction from two RGB images only, without relying on RGB-D inputs or strong priors such as Manhattan assumptions, and can be extended to handle sequences of images. Our results compete with keypoint-based techniques such as ORB-SLAM while providing a dense representation, and are more precise than the direct and semi-direct pose estimation techniques used in LSD-SLAM or DPPTAM. Results were presented at IRIS 2020 42.
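The voting idea for disambiguating homography decompositions can be sketched as follows (a hypothetical simplification, not the paper's exact method: each decomposition of a homography yields several candidate poses, here flattened into vectors, and the candidate supported by the most homographies wins):

```python
import numpy as np

def vote_pose(candidates_per_homography, tol=0.1):
    """Each homography decomposition yields a few pose hypotheses
    (represented here as flat vectors). The hypothesis consistent with
    the largest number of homographies, within tolerance, wins the vote."""
    all_candidates = [c for cands in candidates_per_homography for c in cands]
    best, best_votes = None, -1
    for c in all_candidates:
        votes = sum(
            1 for cands in candidates_per_homography
            if any(np.linalg.norm(c - other) < tol for other in cands)
        )
        if votes > best_votes:
            best, best_votes = c, votes
    return best, best_votes

# Hypothetical usage: three homographies, each producing spurious poses,
# but all sharing one common true pose.
true_pose = np.zeros(6)
best, votes = vote_pose([[true_pose, np.ones(6)],
                         [true_pose, np.full(6, 2.0)],
                         [true_pose]])
print(votes)  # 3
```

Only the true relative pose is geometrically compatible with every observed plane, which is why cross-homography consensus resolves the per-plane decomposition ambiguity.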

7.2.6 Contact Preserving Shape Transfer For Rigging-Free Motion Retargeting

Participants: Franck Multon, Jean Basset.

In 2018, we introduced the idea of a context graph to capture the relationships between body-part surfaces and enhance the quality of motion retargeting. It thus becomes possible to retarget the motion of a source character to a target one while preserving the topological relationships between body-part surfaces. However, this approach requires strictly satisfying distance constraints between body parts, whereas some of them could be relaxed to preserve naturalness. In 2019, we introduced a new paradigm based on transferring the shape instead of encoding the pose constraints to tackle this problem for isolated poses. In 2020, we extended this approach to handle continuity in motion, and non-human characters.

Hence, in 12, we presented an automatic method that retargets poses from a source to a target character by transferring the shape of the target character onto the desired pose of the source character. By considering shape instead of pose transfer, our method better preserves the contextual meaning of the source pose, typically contacts between body parts, than pose-based strategies. To this end, we propose an optimization-based method that deforms the source shape into the desired pose using three main energy functions: similarity to the target shape, body part volume preservation, and collision management to preserve existing contacts and prevent penetrations. The results show that our method retargets complex poses with several contacts to different morphologies, and even creates new contacts when morphology changes require them, such as increases in body size. To demonstrate the robustness of our approach to different types of shapes, we successfully apply it to basic and dressed human characters as well as wild animal models, without the need to adjust parameters.
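A toy version of the three-term objective can make the structure concrete (illustrative only: real meshes need proper volume and collision terms; here shapes are point clouds, `volume_fn` is a caller-supplied proxy, and the weights are made up):

```python
import numpy as np

def shape_energy(verts, target_verts, rest_volume, volume_fn,
                 w_sim=1.0, w_vol=10.0, w_col=100.0, min_dist=0.05):
    """Sum of three toy energy terms on a point-cloud shape:
    similarity to the target shape, volume preservation, and a
    collision (penetration) penalty between vertex pairs."""
    e_sim = np.sum((verts - target_verts) ** 2)
    e_vol = (volume_fn(verts) - rest_volume) ** 2
    # Penalize vertex pairs closer than min_dist (penetration proxy);
    # adding the identity keeps self-distances away from zero.
    diff = verts[:, None, :] - verts[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1) + np.eye(len(verts)))
    e_col = np.sum(np.maximum(0.0, min_dist - d) ** 2) / 2
    return w_sim * e_sim + w_vol * e_vol + w_col * e_col

# Hypothetical usage: a shape already at its target, with its rest volume,
# has zero total energy; an optimizer would minimize this over `verts`.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
energy = shape_energy(verts, verts, rest_volume=1.0, volume_fn=lambda v: 1.0)
print(energy)  # 0.0
```

In the actual method the three terms compete: the similarity term pulls toward the target shape while the volume and collision terms keep contacts closed and bodies impenetrable.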

7.2.7 Walk Ratio: Perception of an Invariant Parameter of Human Walk on Virtual Characters

Participants: Ludovic Hoyet, Benjamin Niay, Anne-Hélène Olivier, Katja Zibrek.

Figure 6: We are interested in whether observers are able to recognize the natural Walk Ratio of an individual, an invariant parameter of human walking (ratio between step length and step frequency), when motions are displayed on virtual characters. In particular, the Walk Ratio represents the fact that different combinations of step length and step frequency can be selected by people to walk at a given speed, e.g., small steps at a high cadence (left) or longer steps at a lower cadence (right).

Synthesizing motions that look realistic and diverse is a challenging task in animation. Therefore, a few generic walking motions are typically used when creating crowds of walking virtual characters, leading to a lack of variations as motions are not necessarily adapted to each and every virtual character's characteristics. While some attempts have been made to create variations, it appears necessary to identify the relevant parameters that influence users' perception of such variations to keep a good trade-off between computational costs and realism. In this context, we investigated 40 the ability of viewers to identify an invariant parameter of human walking named the Walk Ratio (step length to step frequency ratio), which was shown to be specific to each individual and constant for different speeds, but which has never been used to drive animations of virtual characters. To this end, we captured 2 female and 2 male actors walking at different freely chosen speeds, as well as at different combinations of step frequency and step length. We then performed a perceptual study to identify the Walk Ratio that was perceived as the most natural for each actor when animating a virtual character, and compared it to the Walk Ratio freely chosen by the actor during the motion capture session (Figure 6). We found that Walk Ratios chosen by observers were in the range of Walk Ratios measured in the literature, and that participants perceived differences between the Walk Ratios of animated male and female characters, as evidenced in the biomechanical literature. Our results provide new considerations to drive the animation of walking virtual characters using the Walk Ratio as a parameter, and might provide animators with novel means to control the walking speed of characters through simple parameters while retaining the naturalness of the locomotion.
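The Walk Ratio itself is simple to compute; the sketch below shows how two different step-length/step-frequency combinations yield the same walking speed but different Walk Ratios (the numeric values are illustrative, not those of the captured actors):

```python
def walk_ratio(step_length_m, step_frequency_hz):
    """Walk Ratio: step length divided by step frequency (units: m*s)."""
    return step_length_m / step_frequency_hz

def speed(step_length_m, step_frequency_hz):
    """Walking speed: step length times step frequency (m/s)."""
    return step_length_m * step_frequency_hz

# Two gaits at the same speed but with different Walk Ratios:
# short quick steps vs. longer slower steps.
print(speed(0.7, 2.0), walk_ratio(0.7, 2.0))
print(speed(0.875, 1.6), walk_ratio(0.875, 1.6))
```

Because the ratio is roughly constant per individual across speeds, it is a compact per-character parameter for driving walking animations.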

7.2.8 The impact of stylization on face recognition

Participants: Ludovic Hoyet, Franck Multon, Nicolas Olivier.

Figure 7: A face with various levels of two (top and bottom) non-human stylization, from low (left) to high (right). The original human face is on the left.

While digital humans are key aspects of the rapidly evolving areas of virtual reality, gaming, and online communications, many applications would benefit from using digital personalized (stylized) representations of users, as these were shown to greatly increase immersion, presence and emotional response. In particular, depending on the target application, one may want to look like a dwarf or an elf in a heroic fantasy world, or like an alien on another planet, in accordance with the style of the narrative. While creating such virtual replicas requires stylization of the user's features onto the virtual character, no formal study had been conducted to assess the ability to recognize stylized characters. In collaboration with the Hybrid team and InterDigital, we carried out a perceptual study investigating the effect of the degree of stylization on the ability to recognize an actor, and the subjective acceptability of stylizations 41 (Figure 7). Results show that recognition rates decrease as the degree of stylization increases, while acceptability of the stylization increases. These results provide recommendations to achieve good compromises between stylization and recognition, and pave the way to new stylization methods providing a trade-off between stylization and recognition of the actor.

7.2.9 Interaction Fields: Sketching Collective Behaviours.

Participants: Marc Christie, Adèle Colas, Ludovic Hoyet, Anne-Hélène Olivier, Katja Zibrek.

Figure 8: Examples of IF applied by the red agent during a simulation: each yellow agent is moved by the combination of their neighbours' IF (resulting vector in purple).

Many applications of computer graphics, such as cinema, video games, virtual reality, training scenarios, therapy, or rehabilitation, involve the design of situations where several virtual humans are engaged. In applications where a user is immersed in the virtual environment, the (collective) behavior of these virtual humans must be realistic to improve the user's sense of presence. While expressive behaviour appears to be a crucial aspect of realism, collective behaviours simulated using typical crowd simulation models (e.g., social forces, potential fields, velocity selection, vision-based) usually lack expressiveness and do not capture more subtle scenarios (e.g., a group of agents hiding from the user or blocking his/her way), which require the ability to simulate complex interactions. As subtle and adaptable collective behaviours are not easily modeled, there is therefore a need for more intuitive ways to design such complex scenarios. In this preliminary work 50, conducted in collaboration with Julien Pettré and Claudio Pacchierotti in the Rainbow team, we therefore propose a novel approach to sketch such interactions to define collective behaviours. Although other sketch-based approaches exist, they usually focus on goal-oriented path planning, and not on modelling social or collective behaviour. In comparison, we present the concepts of a new approach based on a user-friendly application enabling users to draw target interactions between agents through intuitive vector fields (Figure 8). In the future, our goal is to use this approach to facilitate the design of expressive collective behaviours. By considering more generic and dynamic situations, it enables diversified and subtle interactions, whereas existing approaches have mostly focused on predefined static scenarios.
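The combination rule from the figure caption, where each agent is moved by the combination of its neighbours' Interaction Fields, can be sketched as follows (a hypothetical repulsive field stands in for a user-drawn one; the actual tool lets users sketch arbitrary vector fields):

```python
import numpy as np

def if_velocity(agent_pos, neighbour_positions, field_fn):
    """Resulting agent motion: sum of each neighbour's Interaction Field,
    evaluated at the agent's position relative to that neighbour."""
    v = np.zeros(2)
    for n in neighbour_positions:
        v += field_fn(np.asarray(agent_pos, float) - np.asarray(n, float))
    return v

def repulsive_field(rel):
    """Hypothetical sketched field: pushes away, decaying with distance."""
    d = np.linalg.norm(rel)
    return rel / (d * d) if d > 1e-6 else np.zeros(2)

# Hypothetical usage: one neighbour at the origin pushes the agent outward.
v = if_velocity([1.0, 0.0], [[0.0, 0.0]], repulsive_field)
print(v)  # [1. 0.]
```

Replacing `repulsive_field` with a sketched field (e.g. one that circles around or hides behind the emitting agent) is what allows the same summation rule to produce very different collective behaviours.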

7.3 Fidelity of Virtual Reality

MimeTIC wishes to promote the use of Virtual Reality to analyze and train human motor performance. It raises the fundamental question of the transfer of knowledge and skills acquired in VR to real life. In 2020, we maintained our efforts to enhance the experience of users when interacting physically with a virtual world. We developed an original setup to carry out experiments aiming at evaluating various haptic feedback rendering techniques. In collaboration with the Hybrid team, we put considerable effort into better simulating avatars of the users, and analyzed embodiment in various VR conditions, leading to several co-supervised PhD students. In collaboration with the Rainbow team, we also explored methods to enhance the VR experience when navigating in crowds, using perceptual studies, new steering methods, and the possibility to physically interact with other people through haptic feedback.

7.3.1 Biofidelity in VR

Participants: Simon Hilt, Georges Dumont, Charles Pontonnier.

Virtual environments (VE) and haptic interfaces (HI) tend to be introduced as virtual prototyping tools to assess ergonomic features of workstations. These approaches are cost-effective and convenient, since working directly on the Digital Mock-Up in a VE is preferable to constructing a physical mock-up in a Real Environment (RE): it allows low-cost ergonomics studies upstream of the design chain, through the interaction of an operator with a digital mock-up. However, such a system is usable only if the ergonomic conclusions drawn from the VE are similar to those that would be drawn in the real world. The focus was put on evaluating the impact of visual and haptic renderings on biomechanical fidelity for pick-and-place tasks. We developed an original setup 9, enabling us to separate the effects of visual and haptic renderings in the scene, in order to investigate the individual and combined effects of these modes of immersion and interaction on the biomechanical behavior of the subject during the task. We focused particularly on the mixed effects of the renderings on the biomechanical fidelity of the system in simulating pick-and-place tasks. Fourteen subjects performed time-constrained pick-and-place tasks in RE and VE, with a real and a virtual, haptic-driven object, at three different speeds. Motion of the hand and muscle activation of the upper limb were recorded. A questionnaire subjectively assessed discomfort and immersion. The results revealed significant differences between measured indicators in RE and VE, and with real and virtual objects. Objective and subjective measures indicated higher muscle activity and longer hand trajectories in VE and with the HI. Another important element is that no cross effect between haptic and visual rendering was reported.
These results confirm that such systems should be used with caution for ergonomics evaluation, especially when investigating postural and muscle quantities as discomfort indicators. The last contribution of this part lies in an experimental setup that is easily replicable to assess more systematically the biomechanical fidelity of virtual environments for ergonomics purposes 20. Haptic feedback is a way to interact with the Digital Mock-Up, but the control of the haptic device may affect the feedback. We proposed to evaluate the biomechanical fidelity of a pick-and-place task performed with a 6-degrees-of-freedom HI and a dedicated inertial and viscous friction compensation algorithm. The proposed work shows that, for the manipulation of such low masses in the proposed setup 10, the subjective feeling of the user operating the HI, obtained through a questionnaire, does not correspond to the muscle activity. Such a result is fundamental to classify what can be transferred from virtual to real at a biomechanical level. Additional algorithms need to be developed to achieve higher biomechanical fidelity, i.e. so that the user feels and undergoes the same constraints as when interacting with the same real object 30.

This work is also included in the PhD Thesis of Simon Hilt, defended December 4th 2020  47.

Figure 9: Top left: Experimental setup with two targets for pick-and-place tasks. A cylindrical obstacle 8 cm in diameter and 10 cm high was placed between the targets. The HI (Virtuose6D) was placed close by so that the movement was within its range of accessibility. The subject was equipped with a Vive tracker to record the position and orientation of his hand and with a myo bracelet to record the electromyographic activity of his arm. Finally, the subject was equipped with an HTC Vive Pro helmet during the virtual immersion where the virtual scene was projected. Bottom left: This virtual scene included the same elements as the real scene except for the HI with the addition of a virtual hand collocated thanks to the Vive Tracker. Right: Illustration of the four combinations of immersion/interaction modes.
Figure 10: Experimental setup to explore subjective perception of haptic manipulation with respect to biomechanical quantities

7.3.2 Avatar and Sense of Embodiment: Studying the Relative Preference Between Appearance, Control and Point of View.

Participants: Rebecca Fribourg, Ludovic Hoyet.

Figure 11: The four tasks implemented in the subjective matching experiment with the avatar's appearance at maximum level of realism. From left to right: Punching, Soccer, Fitness and Walking.

In Virtual Reality, a number of studies have been conducted to assess the influence of avatar appearance, avatar control and user point of view on the Sense of Embodiment (SoE) towards a virtual avatar. However, such studies tend to explore each factor in isolation. This work, conducted in collaboration with Ferran Argelaguet and Anatole Lécuyer in the Hybrid team, aims to better understand the inter-relations among these three factors by conducting a subjective matching experiment 17. In the presented experiment (n=40), participants had to match a given “optimal” SoE avatar configuration (realistic avatar, full-body motion capture, first-person point of view), starting from a “minimal” SoE configuration (minimal avatar, no control, third-person point of view), by iteratively increasing the level of each factor. The choices of the participants provide insights into their preferences and perception of the three factors considered. Moreover, the subjective matching procedure was conducted in the context of four different interaction tasks, with the goal of covering a wide range of actions an avatar can perform in a VE (Figure 11). This work also included a baseline experiment (n=20), which was used to define the number and order of the different levels for each factor prior to the subjective matching experiment (e.g. different degrees of realism ranging from abstract to personalised avatars for the visual appearance). The results of the subjective matching experiment show, first, that point of view and control levels were consistently increased by users before appearance levels when it came to enhancing the SoE. Second, several configurations were identified that elicited an SoE equivalent to the one felt in the optimal configuration, though these varied between tasks. Taken together, our results provide valuable insights about which factors to prioritize in order to enhance the SoE towards an avatar in different tasks, and about configurations which lead to a fulfilling SoE in VEs.

7.3.3 Virtual Co-Embodiment: Evaluation of the Sense of Agency while Sharing the Control of a Virtual Body among Two Individuals

Participants: Rebecca Fribourg, Ludovic Hoyet.

Figure 12: Our “Virtual Co-Embodiment” experience enables a pair of users to be embodied simultaneously in the same virtual avatar (Left). The positions and orientations of the two users are applied to the virtual body of the avatar based on a weighted average, e.g., “User A” with 25% control and “User B” with 75% control over the virtual body (Right).

In this work, conducted in collaboration with Ferran Argelaguet and Anatole Lécuyer in Hybrid team, as well as with Nami Ogawa, Takuji Narumi and Michitaka Hirose from The University of Tokyo (Japan), we introduce a concept called “virtual co-embodiment” 18, which enables users to share their virtual avatar with another entity (e.g., another user, robot, or autonomous agent, see Figure 12). We describe a proof-of-concept in which two users can be immersed from a first-person perspective in a virtual environment and can have complementary levels of control (total, partial, or none) over a shared avatar. In addition, we conducted an experiment to investigate the influence of users' level of control over the shared avatar and prior knowledge of their actions on the users' sense of agency and motor actions. The results showed that participants are good at estimating their real level of control but significantly overestimate their sense of agency when they can anticipate the motion of the avatar. Moreover, participants performed similar body motions regardless of their real control over the avatar. The results also revealed that the internal dimension of the locus of control, which is a personality trait, is negatively correlated with the user's perceived level of control. The combined results unfold a new range of applications in the fields of virtual-reality-based training and collaborative teleoperation, where users would be able to share their virtual body.

7.3.4 Influence of Threat Occurrence and Repeatability on the Sense of Embodiment and Threat Response in VR

Participants: Rebecca Fribourg, Ludovic Hoyet.

Figure 13: Overview of the virtual environment representing a factory (left), an avatar representing a user placing an ingot on the plate arriving on the conveyor belt (center), and the crusher threatening the user by suddenly going down while the user's hand is under it.

This work, conducted in collaboration with Ferran Argelaguet and Anatole Lécuyer in the Hybrid team, explored the potential impact of threat occurrence and repeatability on users' Sense of Embodiment (SoE) and threat response 37, by exploring the following question: Does virtual threat harm the Virtual Reality experience? (Figure 13). The main findings of our experiment are that the introduction of a threat does not alter users' SoE but might change their behaviour while performing a task after the threat occurrence. In addition, threat repetitions did not show any effect on users' subjective SoE, or on subjective and objective responses to threat. Taken together, our results suggest that embodiment studies should expect a potential change in participants' behaviour while performing a task after a threat has been introduced, but that threat introduction and repetition do not seem to impact either the subjective measure of the SoE (user responses to questionnaires) or the objective measure of the SoE (behavioural response to threat towards the virtual body).

7.3.5 Studying the Inter-Relation Between Locomotion Techniques and Embodiment in Virtual Reality

Participants: Diane Dewez, Ludovic Hoyet.

Figure 14: Illustration of the four tasks performed by the participants in our user study, all in the full-body avatar condition. The same tasks were also performed without an avatar. From left to right: Training task, Corridor task, Path-following task and Columns task.

Virtual embodiment and navigation are two topics widely studied in the virtual reality community. Despite the potential inter-relation between embodiment and locomotion, studies on virtual navigation rarely supply users with a virtual representation, while studies on virtual embodiment rarely allow users to virtually navigate. This work 34, conducted in collaboration with Ferran Argelaguet and Anatole Lécuyer in Hybrid team, explores this potential inter-relation by focusing on the two following questions: Does the locomotion technique have an impact on the user's sense of embodiment? Does embodying an avatar have an impact on the user’s preference and performance depending on the locomotion technique? To address these questions, we conducted a user study (N = 60) exploring the relationship between the locomotion technique and the user's sense of embodiment over a virtual avatar seen from a first-person perspective (Figure 14). Three widely used locomotion techniques were evaluated: real walking, walking-in-place and virtual steering. All participants performed four different tasks involving a different awareness of their virtual avatar. Participants also performed the same tasks without being embodied in an avatar. The results show that participants had a comparable sense of embodiment with all techniques when embodied in an avatar, and that the presence or absence of the virtual avatar did not alter their performance while navigating, independently of the technique. Taken together, our results represent a first attempt to qualify the inter-relation between virtual navigation and virtual embodiment, and suggest that the 3D locomotion technique used has little influence on the user's sense of embodiment in VR.

7.3.6 Eye-Gaze Activity in Crowds: Impact of Virtual Reality and Density

Participants: Florian Berton, Ludovic Hoyet, Anne-Hélène Olivier.

Figure 15: Our objective is to analyze eye-gaze activity within a crowd to better understand walkers' interaction neighborhood and simulate crowd behaviour. We designed two experiments where participants physically walked both in a real and virtual street populated with other walkers, while we measured their eye-gaze activity (red circle). We evaluated the effect of virtual reality on eye-gaze activity by comparing real and virtual conditions (left and middle-right) and investigated the effect of crowd density (right).

When we are walking in crowds, we mainly use visual information to avoid collisions with other pedestrians. Thus, gaze activity should be considered to better understand interactions between people in a crowd. In this work conducted in collaboration with Julien Pettré in Rainbow team, we used Virtual Reality (VR) to facilitate motion and gaze tracking, as well as to accurately control experimental conditions, in order to study the effect of crowd density on eye-gaze behavior 31. Our motivation is to better understand how interaction neighborhood (i.e., the subset of people actually influencing one's locomotion trajectory) changes with density. To this end, we designed two experiments (Figure 15). The first one evaluates the biases introduced by the use of VR on the visual activity when walking among people, by comparing eye-gaze activity while walking in a real and virtual street. We then designed a second experiment where participants walked in a virtual street with different levels of pedestrian density. We demonstrate that gaze fixations are performed at the same frequency despite increases in pedestrian density, while the eyes scan a narrower portion of the street. These results suggest that in such situations walkers focus more on people in front and closer to them. These results provide valuable insights regarding eye-gaze activity during interactions between people in a crowd, and suggest new recommendations in designing more realistic crowd simulations.

7.3.7 Crowd Navigation in VR: exploring haptic rendering of collisions

Participants: Florian Berton, Ludovic Hoyet, Anne-Hélène Olivier.

Figure 16: Our objective is to understand whether and to what extent providing haptic rendering of collisions during navigation through a virtual crowd (right) makes users behave more realistically. Whenever a collision occurs (center), armbands worn on the arms locally vibrate to render this contact (left). We carried out an experiment with 23 participants, testing both subjective and objective metrics regarding the users' path planning, body motion, kinetic energy, presence, and embodiment.

Virtual reality (VR) is a valuable experimental tool for studying human movement, including the analysis of interactions during locomotion tasks for developing crowd simulation algorithms. However, these studies are generally limited to distant interactions in crowds, due to the difficulty of rendering realistic sensations of collisions in VR. In this work, conducted in collaboration with Julien Pettré and Claudio Pacchierotti in the Rainbow team, we explored the use of wearable haptics to render contacts during virtual crowd navigation 13. We focus on the behavioural changes occurring with or without haptic rendering during a navigation task in a dense crowd, as well as on potential after-effects introduced by the use of haptic rendering. Our objective is to provide recommendations for designing VR setups to study crowd navigation behaviour. To this end, we designed an experiment (N=23) where participants navigated in a crowded virtual train station without, then with, and then again without haptic feedback of their collisions with virtual characters (Figure 16). Results show that providing haptic feedback improved the overall realism of the interaction, as participants more actively avoided collisions. We also noticed a significant after-effect in the users' behaviour when haptic rendering was once again disabled in the third part of the experiment. Nonetheless, haptic feedback did not have any significant impact on the users' sense of presence and embodiment.

7.3.8 The effect of gender and attractiveness of motion on proximity in virtual reality

Participants: Ludovic Hoyet, Benjamin Niay, Anne-Hélène Olivier, Katja Zibrek.

Figure 17: Example of tasks in the experiment presented in 29. Character approaching the participant in the Proximity task (left) and the rating scale in the Attractiveness task (right).

In human interaction, people will keep different distances from each other depending on their gender. For example, males will stand further away from males and closer to females. Previous studies in virtual reality (VR), where people were interacting with virtual humans, showed a similar result. However, many other variables influence proximity, such as appearance characteristics of the virtual character (e.g., attractiveness). In this work 29, conducted in collaboration with Julien Pettré in the Rainbow team and Rachel Mcdonnell from Trinity College Dublin (Ireland), we focused on proximity to virtual walkers, where gender could be recognised from motion only, since previous studies using point-light displays found that walking motion is rich in gender cues. In our experiment, a walking wooden mannequin approached the participant, who was embodied in a virtual avatar using the HTC Vive Pro HMD and controller (Figure 17). The mannequin animation was motion captured from several male and female actors, and each motion was displayed individually on the character. Participants used the controller to stop the approaching mannequin when they felt it was uncomfortably close to them. Based on previous work, we hypothesized that proximity would be affected by the gender of the character, but unlike previous research, the gender in our experiment could only be determined from the character's motion. We also expected differences in proximity according to the gender of the participant. We additionally expected some motions to be rated more attractive than others, and that attractive motions would reduce the proximity measure. Our results show support for the last two assumptions, but no difference in proximity was found according to the gender of the character's motion. Our findings have implications for the design of virtual characters in interactive virtual environments.

7.3.9 Effect of social settings on proxemics during social interactions in real and virtual conditions

Participants: Florian Berton, Ludovic Hoyet, Anne-Hélène Olivier, Katja Zibrek.

Figure 18: Illustration of the four social environments used in the experiment to investigate the effect of social settings on proxemics 35.

The modelling of virtual crowds for major events, such as the Olympics in Paris in 2024, takes into account the global proxemics standards of individuals without questioning the possible variability of these standards according to the space in which the interactions take place. We know that body interactions (Goffman, 1974) are subject to rules whose variability is, at least in part, cultural (Hall, 1971). Obviously, these proxemics standards also address practical issues such as available space and space occupancy density. Our objective in this study was to understand the conditions which can explain why the discomfort felt and the adaptive behaviour performed differ when the interaction takes place in the same city and in spaces with identical occupancy densities. In particular, we focused on the effect of the social context of the environment. We aimed to estimate the extent to which the prospect of attending a sports performance alters sensitivity to the transgression of proxemics norms. An additional objective was to evaluate whether virtual reality can help us provide new insights in such a social context, where objective measures outside the lab are complex to perform. To answer this question, we designed, in collaboration with Julien Pettré (Rainbow team) and colleagues in the field of sociology François Le Yondre, Théo Rougant and Tristan Duverne (Univ Rennes II), an experiment (in a real context and then in virtual reality) in two different locations with similar densities: a train station and the surroundings of a stadium before a Ligue 1 football match 35 (Figure 18). The task performed by a confederate was to walk and stand excessively close to men aged 20 to 40. The individuals' behaviour (they were unaware of being subjects of the experiment) was observed by ethnography, and explanatory interviews were conducted immediately afterwards.
This same experiment was carried out in virtual reality on the same type of population, modelling the two spaces and making it possible to acquire more precise and quantifiable data than in real conditions, such as distances, travel time and eye fixations. Our results suggest that proxemics norms vary according to the subjective relationship of the individual to the social setting: discomfort was much higher in the train station than in the sport context. This variation would translate directly into a modulation of bodily sensitivity to the proximity of the body of others. While we were able to show that social norms still exist in VR, our results did not show a main effect of the social setting on participants' sensitivity to the transgression of proxemics norms. From a methodological point of view, explanatory interviews make it possible to identify the reasons why virtual reality does not generate the same reactions, although it sometimes provokes the same sensitivity. We discuss our results in the frame of the cross-fertilization between Sociology and VR.

7.3.10 Characterization and Evaluation of Control Laws for Virtual Steering Navigation

Participants: Anne-Hélène Olivier, Hugo Brument.

This work was performed in collaboration with Ferran Argelaguet and Maud Marchal from Hybrid and Rainbow teams 33.

This work investigates the influence of the control law used in virtual steering techniques, and in particular of the speed update, on users' behaviour while navigating in virtual environments. To this end, we first characterized existing control laws. Then, we designed a user study to evaluate the impact of the control law on users' behaviour and performance in a navigation task. Participants had to perform a virtual slalom while wearing a head-mounted display. They followed three different sinusoidal-like trajectories (with low, medium and high curvature) using a torso-steering navigation technique with three different control laws (constant, linear and adaptive). The adaptive control law, based on the biomechanics of human walking, takes into account the relation between speed and curvature. We propose a spatial and temporal analysis of the trajectories performed both in the virtual and the real environment. The results show that users' trajectories and behaviours were significantly affected by the shape of the trajectory but also by the control law. In particular, users' angular velocity was higher with the constant and linear laws than with the adaptive law. The analysis of subjective feedback suggests that these differences might result in a lower perceived physical demand and effort for the adaptive control law. We also discuss potential applications of these results to improve the design and evaluation of navigation control laws.
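The three families of control laws can be sketched as simple speed-update functions. The following is only an illustrative sketch, not the implementation evaluated in the study; the maximal speed, maximal torso angle and the speed-curvature coefficients `gamma` and `beta` (reminiscent of the power law relating speed and curvature in human walking) are hypothetical values:

```python
import math

def constant_law(v_max, torso_angle, curvature):
    """Constant law: full speed regardless of torso deflection or path."""
    return v_max

def linear_law(v_max, torso_angle, curvature, angle_max=math.radians(30)):
    """Linear law: speed proportional to torso deflection, capped at v_max."""
    return v_max * min(abs(torso_angle) / angle_max, 1.0)

def adaptive_law(v_max, torso_angle, curvature, gamma=1.0, beta=1.0 / 3.0):
    """Adaptive law: cap speed with a speed-curvature power law
    (v = gamma * kappa**-beta), mimicking human walking; the gains
    gamma and beta here are hypothetical."""
    if curvature <= 1e-6:        # straight path: no curvature-based cap
        return v_max
    return min(v_max, gamma * curvature ** (-beta))
```

All three functions share the same signature so a navigation loop can swap them freely; unused arguments are kept on purpose.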

7.3.11 Influence of Dynamic Field of View Restrictions on Rotation Gain Perception in Virtual Environments

Participants: Anne-Hélène Olivier, Hugo Brument.

This work was performed in collaboration with Ferran Argelaguet and Maud Marchal from Hybrid and Rainbow teams 32.

Figure 19: Illustration of the 4 different FoV restrictions (vignetting) during the same rightwards rotation: (top left) Horizontal Luminance; (top right) Global Luminance; (bottom left) Horizontal Blur; (bottom right) Global Blur.

The perception of rotation gain, defined as a modification of the virtual rotation with respect to the real rotation, has been widely studied to determine detection thresholds and widely applied in redirected navigation techniques. In contrast, Field of View (FoV) restrictions have been explored in virtual reality as a mitigation strategy for motion sickness, although they can alter the user's perception and navigation performance in virtual environments. This work explores whether dynamic FoV manipulations, also referred to as vignetting, can alter the perception of rotation gains during virtual rotations in virtual environments (VEs). We conducted a study to estimate and compare perceptual thresholds of rotation gains while varying the vignetting type (no vignetting, horizontal and global vignetting) and the vignetting effect (luminance or blur), as illustrated in Figure 19. Twenty-four participants performed 60 or 90 degree virtual rotations in a virtual forest, with different rotation gains applied. Participants had to choose whether or not the virtual rotation was greater than the physical one. Results showed that the point of subjective equality differed across vignetting types, but not across vignetting effects or turn amplitudes. Subjective questionnaires indicated that vignetting seems less comfortable than the baseline condition for performing the task. These results open up applications to improve the design of vignetting for redirection techniques.
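The point of subjective equality (PSE) is the rotation gain at which participants answer "the virtual rotation was greater" half of the time. A lightweight way to estimate it from the measured response proportions is linear interpolation around the 50% point; this is only an illustrative shortcut, the study itself fits a proper psychometric function:

```python
def point_of_subjective_equality(gains, p_greater):
    """Estimate the PSE: the gain at which the proportion of
    'virtual rotation was greater' answers crosses 0.5, by linear
    interpolation between the two bracketing gains."""
    pairs = sorted(zip(gains, p_greater))
    for (g0, p0), (g1, p1) in zip(pairs, pairs[1:]):
        if p0 <= 0.5 <= p1:
            if p1 == p0:
                return (g0 + g1) / 2.0
            return g0 + (0.5 - p0) * (g1 - g0) / (p1 - p0)
    raise ValueError("0.5 is not bracketed by the measured proportions")
```

A PSE above 1.0 would mean the gain must exaggerate the virtual rotation before it is noticed.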

7.4 Sports

MimeTIC promotes the idea of coupling motion analysis and synthesis in various domains, especially sports. More specifically, we have a long experience and international leadership in using Virtual Reality for analyzing and training sports performance. In 2020, we continued 1) to explore how VR can be used to design original training systems, capable of accurately analyzing the correlation between covert attention and EEG signals in soccer goalkeepers, and 2) to apply biomechanical analysis to identify injury risks in various sports, such as tennis.

7.4.1 Uncovering EEG Correlates of Covert Attention in Soccer Goalkeepers: Towards Innovative Sport Training Procedures

Participants: Richard Kulpa, Benoit Bideau.

Advances in sports sciences and neurosciences offer new opportunities to design efficient and motivating sport training tools. For instance, using NeuroFeedback (NF), athletes can learn to self-regulate specific brain rhythms and consequently improve their performance. In 21, we focused on soccer goalkeepers' Covert Visual Spatial Attention (CVSA) abilities, which are essential for these athletes to reach high performance. We looked for Electroencephalography (EEG) markers of CVSA usable for virtual reality-based NF training procedures, i.e., markers that comply with the following criteria: (1) specific to CVSA, (2) detectable in real time and (3) related to goalkeepers' performance/expertise. Our results revealed that the best-known EEG marker of CVSA—increased α-power ipsilateral to the attended hemi-field—was not usable since it did not comply with criteria 2 and 3. Nonetheless, we highlighted a significant positive correlation between athletes' improvement in CVSA abilities and the increase of their α-power at rest. While the specificity of this marker remains to be demonstrated, it complied with both criteria 2 and 3. This result suggests that it may be possible to design innovative ecological training procedures for goalkeepers, for instance using a combination of NF and cognitive tasks performed in virtual reality.
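For readers unfamiliar with the α-power marker: it is the spectral power of the EEG signal in the alpha band (roughly 8-12 Hz). A minimal sketch of how it can be computed from one channel with a plain FFT periodogram (illustrative only, not the signal-processing pipeline used in the study):

```python
import numpy as np

def alpha_band_power(eeg, fs, band=(8.0, 12.0)):
    """Mean spectral power of one EEG channel in the alpha band,
    from a plain FFT periodogram of the whole window.
    eeg: 1-D samples; fs: sampling rate in Hz."""
    eeg = np.asarray(eeg, float)
    psd = np.abs(np.fft.rfft(eeg)) ** 2 / len(eeg)     # periodogram
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()
```

A real-time NF loop would apply this to short sliding windows rather than a whole recording.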

7.4.2 Training team ball sports in VR

Participants: Richard Kulpa, Benoit Bideau, Charles Faure, Annabelle Limballe.

Virtual reality (VR) is a widespread technology drawing increasing interest from players and coaches, especially in team ball sports, as it offers a simple tool to simulate, analyse and train situations that are often too complex to reproduce on the field. In this review 16 we aimed at (1) providing an overview of the methodologies and outcomes of research studies using VR in team ball sports; (2) better evaluating the potential interest of VR to analyse or train team ball sports situations and (3) identifying limitations, gaps in knowledge and remaining scientific challenges. The MEDLINE and Web of Science Core Collection databases were searched using predefined combinations of keywords. Thirty articles were retained and analysed. VR can be an interesting tool to assess or train team ball sports skills/situations as it allows researchers to control and standardise situations and focus on specific skills/subskills. Studies that used VR in team ball sports still have some limitations, mainly due to technical issues or study design. This review was the opportunity to describe how VR should be used to enhance understanding of performance in team ball sports. Additional suggestions for future research and study design were proposed.

7.4.3 Impact of open stance in tennis forehand on performance and injury

Participants: Richard Kulpa, Benoit Bideau, Pierre Touzard, Anthony Sorel.

The open stance forehand has been hypothesized to be more traumatic in terms of injuries in tennis than the neutral stance forehand. We compared the kinematics and kinetics at the knee and hip 15 during three common forehand stroke stances (attacking neutral stance ANS, attacking open stance AOS, defensive open stance DOS) to determine whether the open stance forehand induces higher loadings and to discuss its potential relationship with given injuries. The results showed that the DOS increased hip joint angles and loading, thus potentially increasing the risk of hip overuse injuries. The DOS-induced hip motion could put players at a higher risk of posterior-superior hip impingement compared with the ANS and AOS. The results also showed that the DOS increases vertical GRF, maximum knee flexion and abduction angles, range of knee flexion-extension, peak compressive, distractive and medial knee forces, and peak knee abduction and external rotation torques. Consequently, the DOS appears potentially more at risk for specific knee injuries.

7.5 Ergonomics

Ergonomics has become an important application domain in MimeTIC: being able to capture, analyze, and model human performance at work. In this domain, a key challenge consists in using limited equipment to capture the physical activity of workers in real conditions. Hence, in 2020, we designed a new approach to predict external forces using mainly motion capture data, and to personalize the biomechanical capabilities (maximum feasible force/torque) of specific populations. Knowing the forces and postures of the user, a key problem then consists in quantifying the discomfort of the corresponding task. We explored machine learning techniques to design such a discomfort function based on biomechanical data.

7.5.1 Motion-based Prediction of External Forces

Participants: Charles Pontonnier, Georges Dumont, Claire Livet, Anthony Sorel, Nicolas Bideau.

We proposed 25 a method to predict the external efforts exerted on a subject during handling tasks using only a measure of their motion. These efforts are the contact forces and moments on the ground and on the load carried by the subject. The method is based on a contact model initially developed to predict ground reaction forces and moments. Discrete contact points are defined on the biomechanical model at the feet and the hands. An optimization technique computes the minimal forces at each of these points satisfying the dynamic equations of the biomechanical model and the load. The method was tested on a set of asymmetric handling tasks performed by 13 subjects and validated using force platforms and an instrumented load. For each task, predictions of the vertical forces achieved an RMSE of about 0.25 N/kg for the feet contacts and below 1 N/kg for the hand contacts. This method makes it possible to quantitatively assess asymmetric handling tasks on the basis of kinetic variables without additional instrumentation such as force sensors, and thus improves the ecological validity of the studied tasks. We evaluated this method 26 on manual material handling (MMH) tasks. From a set of hypothesized contact points between the subject and the environment (ground and load), external forces were calculated as the minimal forces at each contact point ensuring the dynamic equilibrium. Ground reaction forces and moments (GRF&M) and load contact forces and moments (LCF&M) were computed from motion data alone. With an inverse dynamics method, the predicted data were then used to compute kinetic variables such as back loading. On a cohort of 65 subjects performing MMH tasks, the mean correlation coefficients between predicted and experimentally measured GRF for the vertical, antero-posterior and medio-lateral components were 0.91 (0.08), 0.95 (0.03) and 0.94 (0.08), respectively. The associated RMSE were 0.51 N/kg, 0.22 N/kg and 0.19 N/kg. The correlation coefficient between L5/S1 joint moments computed from predicted and measured data was 0.95, with an RMSE of 14 Nm for the flexion/extension component. This method thus allows the assessment of MMH tasks without force platforms, which increases the ecological validity of the tasks studied and enables dynamic analyses in real settings outside the laboratory.
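The core idea of predicting "minimal forces satisfying the dynamic equilibrium" can be illustrated on a drastically simplified case. Under the single equality constraint sum(F_i) = m(a - g), minimizing the sum of squared contact forces yields an even split across contact points. This is only a sketch; the published method also handles contact moments, unilaterality and the distinction between feet and load contacts:

```python
import numpy as np

def minimal_contact_forces(mass, com_acc, contact_points):
    """Distribute the total external force required by the dynamic
    equilibrium sum(F_i) = m * (a - g) over the contact points,
    minimizing sum(|F_i|^2). With only this equality constraint the
    minimizer is the even split F_i = F_total / n (simplified sketch
    of the force-prediction principle)."""
    g = np.array([0.0, 0.0, -9.81])                    # gravity, z up
    f_total = mass * (np.asarray(com_acc, float) - g)  # required total force
    n = len(contact_points)
    return [f_total / n for _ in contact_points]
```

For a static 70 kg subject on two foot contacts, each contact carries half the body weight vertically and nothing horizontally.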

7.5.2 Posture Assessment and Subjective Scale Agreement in Picking Tasks with Low Masses

Participants: Franck Multon, Charles Pontonnier, Georges Dumont, Olfa Haj Mahmoud.

This work 44 aims at analyzing the relationship between postural assessment and perceived discomfort for picking tasks with low masses (1 kg), involving a wide range of positions/postures. We analyzed the agreement of different postural scores (mean value, integral value, root mean square value, weighted average time at each RULA level and the percentage of time per RULA level) with the subjective assessments. The statistical analysis showed no overall correlation between subjective and postural scores. A few negative correlations were even noticed, especially between the time spent at specific levels of postural discomfort and the subjective feedback. The results showed that the subjective assessment was not correlated with the postural assessment in such low discriminant tasks. Although postural assessment made it possible to discriminate the more difficult postures with regard to the experimental conditions, the subjects were unable to report coherent discomfort feedback. This work demonstrates the complexity of modeling subjects' subjective feedback to report perceived discomfort. Future work will evaluate how non-linear models could better approximate this feedback.
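The aggregate postural scores compared in the study are straightforward to compute from a per-frame RULA time series. A minimal sketch (the exact windowing and the "weighted average time" variant are omitted; `dt` is the frame duration):

```python
import numpy as np

def postural_scores(rula, dt=1.0):
    """Aggregate scores over a RULA time series (one score per frame):
    mean, time integral, RMS, and percentage of time per RULA level."""
    rula = np.asarray(rula, float)
    t_total = len(rula) * dt
    levels, counts = np.unique(rula, return_counts=True)
    return {
        "mean": rula.mean(),
        "integral": rula.sum() * dt,
        "rms": np.sqrt((rula ** 2).mean()),
        "pct_time_per_level": {int(l): 100.0 * c * dt / t_total
                               for l, c in zip(levels, counts)},
    }
```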

7.5.3 Generic and specific musculoskeletal models of the French soldier to support their activity

Participants: Charles Pontonnier, Georges Dumont, Nicolas Bideau, Pierre Puchaud, Simon Kirchhofer.

This work aimed at developing generic and specific musculoskeletal representations of the French soldier. Such models are useful to define the biomechanical specificities of a given population and therefore characterize their needs in terms of physical support. More specifically, we aimed at developing biomechanical specifications for a locomotion (lower limb) exoskeleton to support the soldier in mission. Soldiers are widely subject to musculoskeletal disorders, given the physical demands of their missions (load, mission length, ...). From an anthropometric database of the French army, containing 122 anatomical measurements per subject, we were able to create i) representative musculoskeletal models thanks to k-means clustering and geometrical retargeting 28 and ii) efficient geometrical scaling rules based on a limited set of measurements 45. These results make it possible to extend the characterization of this population and to propose relevant exoskeleton architectures with regard to the tasks to be assisted.
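The clustering step can be sketched as a plain k-means over anthropometric measurement vectors, whose centroids give the "average" morphologies used to build representative models. This is a self-contained illustration, not the study's implementation (which works on 122 measurements per subject and adds geometrical retargeting):

```python
import numpy as np

def representative_subjects(anthro, k=3, iters=50, seed=0):
    """Plain k-means over anthropometric vectors (subjects x measures).
    Returns the k centroids (representative morphologies) and the
    cluster label of each subject."""
    anthro = np.asarray(anthro, float)
    rng = np.random.default_rng(seed)
    centroids = anthro[rng.choice(len(anthro), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign each subject to its nearest centroid
        dists = np.linalg.norm(anthro[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of its cluster
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = anthro[labels == j].mean(axis=0)
    return centroids, labels
```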

This work is also included in the PhD Thesis of Pierre Puchaud, defended December 10th 2020 48.

7.6 Locomotion and Interactions between Walkers

MimeTIC is a leader in the study and modeling of walkers' visuo-motor strategies. This implies understanding how humans generate their walking trajectories within an environment. In 2020, we proposed a new simulation approach to compute plausible cyclic and symmetrical gaits based on an anatomical description only. We also designed innovative protocols to evaluate the efficiency of various podiatric methods for gait correction. We also used VR as a means to analyze how pedestrians avoid each other along curvilinear trajectories, which is difficult to carry out in real experiments.

7.6.1 Prediction of plausible locomotion using nonlinear kinematic optimization

Participants: Franck Multon, Nils Hareng.

Predicting the locomotion of a given anatomical structure is a key problem in many applications, such as anthropometry, sports, or rehabilitation. The challenge consists in selecting a plausible sequence of poses among the infinite space of possible poses to walk from one footprint to another, knowing the anatomical structure of the character.

In this work 19, we addressed this problem with a target application in palaeoanthropology. Previous works proposed to optimize the relative trajectory of the ankle in the pelvis reference frame with per-frame inverse kinematics to retrieve the joint angles. However, this approach can lead to asymmetrical, non-periodic motion, and does not take the dynamics of the system into account. Dynamic simulation of bipedal gait has mostly been investigated in robotics using optimal control. It generally requires knowledge of the motion pattern (named walking pattern generator) and may lead to unnatural (i.e., robotic) motion. In this article, we proposed to adapt previous works to compute a symmetric and periodic plausible gait for a given skeletal model, with a minimal set of hypotheses.

7.6.2 Clinical evaluation of plantar exteroceptive inefficiency: normalisation of plantar quotient

Participants: Anne-Hélène Olivier, Armel Crétual, Carole Puil.

Understanding motor control is essential for clinical applications. In the context of podiatry, it is fundamental to evaluate the contribution of sensory systems to the regulation of posture and gait in order to diagnose and adapt a patient's treatment. While the visual system has been extensively studied, cutaneous afferents, which are at the very core of podiatry, are challenging to evaluate. Several approaches have been proposed to quantify the role of plantar somesthetic information, including electric stimulation, anesthesia and foam interposition between the ground and the feet. Being the easiest to implement, the most common approach is foam interposition. By recording the Center of Pressure (CoP) movement, authors showed that standing on foam challenges balance, increasing postural sway and speed. Building on these results, clinicians use the Plantar Quotient to evaluate postural control. It is defined as the ratio between the CoP surface area on foam and the CoP surface area on normal ground, and is expected to be above 100% in healthy people. Using a specific foam named Depron®, Foisy and Kapoula proposed the first study highlighting that patients with plantar exteroceptive inefficiency were more stable on foam, with a ratio below 100%. However, there is a lack of consensus on the characteristics of the foam used to perform such an evaluation. While there are nearly as many foams as experiments conducted on this ratio, it has been shown that foam density as well as thickness have an impact on posture. Besides, Depron® is not a clinical foam (it is a thermal insulator) and is a single-use material. As foam characteristics influence posture, the objective of this project 46 is to quantify the effect of the foam on the Plantar Quotient in order to standardize podiatrists' practices and allow inter-practitioner follow-up as well as a fair comparison between studies. This will bring knowledge on the impact of foam on standing balance.
It will provide information to normalize the use of the Plantar Quotient. Being able to identify a plantar exteroceptive inefficiency population would improve the diagnosis and the treatment of patients with postural deficiencies.
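The Plantar Quotient itself is a simple ratio; spelling it out makes the decision rule explicit (values below 100% suggesting plantar exteroceptive inefficiency, per Foisy and Kapoula):

```python
def plantar_quotient(cop_area_foam, cop_area_ground):
    """Plantar Quotient in percent: ratio of the CoP sway surface area
    measured on foam to the one measured on firm ground. Above 100%
    is expected in healthy subjects; below 100% suggests plantar
    exteroceptive inefficiency."""
    if cop_area_ground <= 0:
        raise ValueError("ground CoP area must be positive")
    return 100.0 * cop_area_foam / cop_area_ground
```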

7.6.3 Collision Avoidance between Walkers on a Curvilinear Path

Participants: Anne-Hélène Olivier, Armel Crétual, Richard Kulpa, Anthony Sorel.

Navigating crowded community spaces requires interactions with pedestrians that follow rectilinear and curvilinear trajectories. In the case of rectilinear trajectories, it has been shown that the perceived action-opportunities of the walkers might be afforded based on a future distance of closest approach. However, little is known about collision avoidance behaviours when avoiding walkers that follow curvilinear trajectories.

Figure 20: Layout of the virtual environment used in this project: Left, a top-down view of the environment, the starting position is highlighted as a red point, the goal is highlighted by a green cylinder, and virtual walker was visible after exiting the corridor. The corridor served to block the view of the environment and the triangles are representative of the direction of path only. Right, a third person perspective of a participant interacting with a virtual walker after exiting the corridor.

In this project 23, 22 participants were immersed in a virtual environment and avoided a virtual human (VH) that followed either a rectilinear path or a curvilinear path with a 5 m or 10 m radius curve, at various distances of closest approach, as illustrated in Figure 20. For a curvilinear path with a 5 m radius, there were significantly more collisions when the VH approached from behind the participant, and significantly more inversions of the initial crossing order when the VH approached from in front, than for the control rectilinear path. During each trial, the evolution of the future distance of closest approach showed similarities between rectilinear paths and curvilinear paths with a 10 m radius curve. Overall, with few collisions and few inversions of crossing order, we can conclude that participants were capable of predicting the future distance of closest approach of virtual walkers following curvilinear trajectories. The task was solved with avoidance adaptations similar to those observed for rectilinear interactions. Future work should consider how acceleration is perceived and used during a collision avoidance task.
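The future distance of closest approach used throughout this analysis can be sketched under the constant-velocity assumption commonly made for rectilinear paths (for curvilinear paths, extrapolation would instead follow the circular arc); this is an illustrative sketch, not the project's analysis code:

```python
import numpy as np

def distance_of_closest_approach(p1, v1, p2, v2):
    """Future distance of closest approach of two walkers, linearly
    extrapolating current positions p and velocities v. Returns the
    minimal predicted distance and the time at which it occurs
    (clamped to t >= 0: a closest approach in the past means the
    walkers are already diverging)."""
    dp = np.asarray(p2, float) - np.asarray(p1, float)
    dv = np.asarray(v2, float) - np.asarray(v1, float)
    vv = dv @ dv
    t_star = 0.0 if vv < 1e-12 else max(0.0, -(dp @ dv) / vv)
    return float(np.linalg.norm(dp + dv * t_star)), t_star
```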

8 Bilateral contracts and grants with industry

8.1 Bilateral contracts with industry

Cifre Faurecia - Monitoring of gestual efficiency at work

Participants: Franck Multon, Georges Dumont, Charles Pontonnier, Olfa Haj Mahmoud.

This Cifre contract started in September 2018 for three years and funds the PhD thesis of Olfa Haj Mahmoud. It consists in designing new methods based on depth cameras to monitor the activity of workers in production lines, compute the potential risk of musculoskeletal disorders, and assess efficiency compared to reference workers. It raises several fundamental questions, such as adapting previous methods for assessing the risk of musculoskeletal disorders, as they generally rely on static poses whereas the worker is in motion. Based on previous work in the team (the previous Cifre PhD thesis of Pierre Plantard), we will provide 30 Hz motion capture of the worker, which will enable us to evaluate various time-dependent assessment methods.

We will also explore how to estimate joint forces and torques based on such noisy and low-sampling-rate motion data. We will then define a new assessment method based on these forces and torques.

The Cifre contract funds the PhD salary plus 10K€ per year for the supervision and management of the PhD thesis.

Cifre InterDigital - Adaptive Avatar Customization for Immersive Experiences

Participants: Franck Multon, Ludovic Hoyet, Nicolas Olivier.

This Cifre contract started in February 2019 for three years and funds the PhD thesis of Nicolas Olivier. The aim of the project is to design stylized avatars of users in immersive environments and digital arts such as video games or cinema.

To this end, we will design a pipeline from motion and shape capture of the user to the simulation of the 3D real-time stylized avatar. It will take hair, eyes, face, body shape and motion into account. The key idea is to stylize both appearance and motion to make the avatar better correspond to the style of the movie or immersive experience. We will carry out perceptual studies to better understand the expectations of users when controlling stylized avatars, in order to maximize embodiment. The Cifre contract funds the PhD salary plus 15K€ per year for the supervision and management of the PhD thesis. This contract is also in collaboration with the Hybrid team.

Cifre InterDigital - Learning-Based Human Character Animation Synthesis for Content Production

Participants: Ludovic Hoyet, Lucas Mourot.

The overall objective of the PhD thesis of Lucas Mourot, which started in June 2020, is to adapt and improve the state of the art in video animation and human motion modelling to develop a semi-automated framework for human animation synthesis that brings real benefits to artists in the movie and advertising industry. In particular, one objective is to leverage novel learning-based approaches to propose skeleton-based animation representations, as well as editing tools, in order to improve the resolution and accuracy of the produced animations, so that automatically synthesized animations might become usable interactively by animation artists.

8.2 Bilateral grants with industry

Collaboration with company SolidAnim (Bordeaux, France)

Participants: Marc Christie, Xi Wang.

This contract started in November 2019 for three years. Its purpose is to explore novel means of performing depth detection for augmented reality applied to the film and broadcast industries. The grant serves to fund the PhD of Xi Wang (160k€).

Collaboration with the Unreal company (Unreal Megagrant)

Participant: Marc Christie.

This contract started in September 2020 for two years. The objective is to explore means of designing novel VR manipulators for character animation tasks in Unreal Engine (70k€).

9 Partnerships and cooperations

9.1 International initiatives

9.1.1 Inria associate team not involved in an IIL


  • Title: from BEhavioral Analysis to modeling and simulation of interactions between walkeRs
  • Duration: 2019 - 2021
  • Coordinator: Anne-Hélène Olivier
  • Partners:
    • Wilfrid Laurier University (Canada)
  • Inria contact: Anne-Hélène Olivier
  • Summary:

    Interactions between individuals are by definition at the very core of our society since they represent the basic synergies of our daily life. When walking in the street or in more dynamical and strategic situations such as sports motion, we take in information about our surrounding environment in order to interact with people, move without collision, alone or in a group, intercept, meet or escape other people. In this context, the BEAR project is a collaboration between researchers from Inria Rennes (Computer Sciences) and Waterloo universities (Kinesiology-Neuroscience). The project aims at providing more realistic models and simulations of interactions between pedestrians, for various applications such as rehabilitation, computer graphics, or robotics. The originality of the project is to investigate the complexity of human interactions from a human motor control perspective, considering the strong coupling between pedestrians’ visual perception and their locomotor adaptations. We will investigate how people gather the relevant information to control their motion. To provide generic models considering the inter-individual variability of humans, we will consider both normal populations and specific populations (children, older adults, injured, diseased ...) for whom an altered perception can modify their motion. The success of this project will be ensured by the strong complementarity of the teams involved. While all researchers will equally perform experiments on interactions between pedestrians, the researchers from Waterloo will take the lead to identify the relevant behavioral variables that will be used mainly by the researchers from Rennes to design the new models and simulations.


9.2 European initiatives

9.2.1 FP7 & H2020 Projects


  • Title: Innovative Volumetric Capture and Editing Tools for Ubiquitous Storytelling
  • Duration: 2020 - 2022
  • Coordinator: Ubisoft Film & Television
  • Partners:
    • INTERDIGITAL (France)
  • Inria contact: Marc Christie
  • Summary: The world of animation, the art of making inanimate objects appear to move, has come a long way over the hundred or so years since the first animated films were produced. In the digital age, avatars have become ubiquitous. These numerical representations of real human forms burst on the scene in modern video games and are now used in feature films as well as virtual reality and augmented reality entertainment. Given the huge market for avatar-based digital entertainment, the EU-funded INVICTUS project is developing digital design tools based on volumetric capture to help authors create and edit avatars and their associated story components (decors and layouts) by reducing manual labour, speeding up development and spurring innovation.


  • Title: Creating Lively Interactive Populated Environments
  • Duration: 2020 - 2024
  • Coordinator: University of Cyprus
  • Partners:
    • University of Cyprus (CY)
    • Universitat Politecnica de Catalunya (ES)
    • INRIA (FR)
    • University College London (UK)
    • Trinity College Dublin (IE)
    • Max Planck Institute for Intelligent Systems (DE)
    • KTH Royal Institute of Technology, Stockholm (SE)
    • Ecole Polytechnique (FR)
    • Silversky3d (CY)
  • Inria contact: Julien Pettré, team Rainbow
  • Summary: CLIPE is an Innovative Training Network (ITN) funded by the Marie Skłodowska-Curie program of the European Commission. The primary objective of CLIPE is to train a generation of innovators and researchers in the field of virtual character simulation and animation. Advances in technology are pushing towards making VR/AR worlds a daily experience. While virtual characters are an important component of these worlds, bringing them to life and giving them interaction and communication abilities requires highly specialized programming combined with artistic skills, and considerable investment: the millions spent on coders and designers to develop video games are a typical example. The research objective of CLIPE is to design the next generation of VR-ready characters. CLIPE addresses the most important current aspects of the problem, making characters capable of: behaving more naturally; interacting with real users sharing a virtual experience with them; and being more intuitively and extensively controllable for virtual-world designers. To meet these objectives, the CLIPE consortium gathers some of the main European actors in the fields of VR/AR, computer graphics, computer animation, psychology and perception. CLIPE also extends its partnership to key industrial actors of populated virtual worlds, giving students the ability to explore new application fields and start collaborations beyond academia. This work is performed in collaboration with Julien Pettré from the Rainbow team.


  • Title: CrowdDNA
  • Duration: 2020 - 2024
  • Coordinator: Inria
  • Partners:
    • Inria (Fr)
    • ONHYS (FR)
    • University of Leeds (UK)
    • Crowd Dynamics (UK)
    • Universidad Rey Juan Carlos (ES)
    • Forschungszentrum Jülich (DE)
    • Universität Ulm (DE)
  • Inria contact: Julien Pettré, team Rainbow
  • Summary: This project aims to enable a new generation of “crowd technologies”, i.e., systems that can prevent deaths, minimize discomfort and maximize efficiency in the management of crowds. It performs an analysis of crowd behaviour to estimate the characteristics essential to understanding the crowd's current state and predicting its evolution. CrowdDNA is particularly concerned with the dangers and discomforts associated with very high-density crowds such as those that occur at cultural or sporting events or in public transport systems. The main idea behind CrowdDNA is that the analysis of a new kind of macroscopic feature of a crowd – such as the apparent motion field, which can be efficiently measured in real mass events – can reveal valuable information about its internal structure, provide a precise estimate of the crowd state at the microscopic level and, more importantly, predict its potential to generate dangerous crowd movements. This way of understanding low-level states from high-level observations is similar to the way humans can tell a lot about the physical properties of a given object just by looking at it, without touching it. CrowdDNA challenges the existing paradigms, which rely on simulation technologies to analyse and predict crowds and require complex estimations of many features such as density, counts or individual features to calibrate simulations. This vision raises one main scientific challenge, which can be summarized as the need for a deep understanding of the numerical relations between the local – microscopic – scale of crowd behaviours (e.g., contacts and pushes at the limb scale) and the global – macroscopic – scale, i.e. the entire crowd. This work is performed in collaboration with Julien Pettré from the Rainbow team.


  • Title: Photoreal REaltime Sentient ENTity
  • Duration: 2019 - 2023
  • Coordinator: Universitat Pompeu Fabra
  • Partners:
    • Framestore (UK)
    • Brainstorm (ES)
    • Cubic Motion (UK)
    • InfoCert (IT)
    • Universitat Pompeu Fabra (ES)
    • Universität Augsburg (DE)
    • Inria (FR)
    • CREW (BE)
  • Inria contact: Julien Pettré, team Rainbow
  • Summary: PRESENT is a three-year Research and Innovation project to create virtual digital companions (embodied agents) that look entirely naturalistic, demonstrate emotional sensitivity, can establish meaningful dialogue, add sense to the experience, and act as trustworthy guardians and guides in interfaces for AR, VR and more traditional forms of media. There is no higher-quality interaction than the human experience, in which we use all our senses together with language and cognition to understand our surroundings and, above all, to interact with other people. We interact with today’s ‘Intelligent Personal Assistants’ primarily by voice; communication is episodic, based on a request-response model. The user does not see the assistant, which does not take advantage of visual and emotional cues or evolve over time. However, advances in the real-time creation of photorealistic computer-generated characters, coupled with emotion recognition, behaviour and natural language technologies, allow us to envisage virtual agents that are realistic in both looks and behaviour; that can interact with users through vision, sound, touch and movement as they navigate rich and complex environments; converse in a natural manner; respond to moods and emotional states; and evolve in response to user behaviour. PRESENT will create and demonstrate a set of practical tools, a pipeline and APIs for creating realistic embodied agents and incorporating them in interfaces for a wide range of applications in entertainment, media and advertising. This work is performed in collaboration with Julien Pettré from the Rainbow team.


  • Title: Safeguarding the Cultural HEritage of Dance through Augmented Reality
  • Duration: June 2018- June 2022
  • Coordinator: Cyprus University
  • Partners:
    • ALGOLYSIS LTD (Cyprus)
  • Inria contact: Franck MULTON
  • Summary: Dance is an integral part of any culture. Through its choreography and costumes, dance imparts richness and uniqueness to that culture. Over the last decade, technological developments have been exploited to record, curate, remediate, provide access to, preserve and protect tangible cultural heritage (CH). However, intangible assets, such as dance, have largely been excluded from this previous work. Recent computing advances have enabled the accurate 3D digitization of human motion. Such systems provide a new means for capturing, preserving and subsequently re-creating intangible cultural heritage (ICH) which goes far beyond traditional written or imaging approaches. However, 3D motion data is expensive to create and maintain, the semantic information it encompasses is difficult to extract and formulate, and current software tools to search and visualize this data are too complex for most end-users. SCHEDAR will provide novel solutions to the three key challenges of archiving, re-using and re-purposing, and ultimately disseminating ICH motion data. In addition, we will devise a comprehensive set of new guidelines, a framework and software tools for leveraging existing ICH motion databases. Data acquisition will be undertaken holistically, encompassing data related to the performance, the performer, the kind of dance, the hidden/untold story, etc. Innovative use of state-of-the-art multisensory Augmented Reality technology will enable direct interaction with the dance, providing new experiences and training in traditional dance, which is key to ensuring this rich cultural asset is preserved for future generations. MimeTIC is responsible for WP3 "Dance Data Enhancement".

9.3 National initiatives

9.3.1 PIA

The Priority Research Program (PPR) "Sport de très haute performance" (STHP) was set up in preparation for the 2024 Paris Olympic and Paralympic Games. It mobilizes the scientific community to meet the needs of high-performance athletes in order to achieve the highest possible performance. With a budget of 20 million euros, it is financed by the Future Investment Program (PIA) and scientifically steered by the CNRS. In this call, the two projects we led were among the six selected and funded nationally:

  • Tennis is a sport with high medal potential at the Olympic and Paralympic Games (JOP) with 9 events on the program (men's and women's singles, men's and women's doubles and mixed, the latter only for able-bodied athletes). France is in the top 3 countries with the most medals in tennis at the Games since 1988. The aim of the BEST - TENNIS project is to optimize the service and return of service performance of the French Tennis Federation's players (able-bodied and wheelchair) through a systemic approach, capitalizing on biomechanical, clinical and cognitive data that will be made available to coaches and athletes through dedicated tools.
  • Virtual reality offers a range of stimuli that go beyond the limits of reality, such as facing an opponent with extraordinary abilities or experiencing an action (with visual, auditory and haptic perceptions in an immersive context) that has not yet been mastered. The REVEA project, supported by the Athletics, Boxing and Gymnastics federations, aims to meet the needs of athletes and coaches by exploiting the unique properties of virtual reality to improve athletes' motor performance through the optimization of the underlying perceptual-motor and cognitive-motor processes, while potentially reducing the risk of injuries related to training overload.

9.3.2 ANR

ANR PRC Capacities

Participants: Charles Pontonnier, Georges Dumont, Pierre Puchaud, Claire Livet, Anthony Sorel.

This project is led by Christophe Sauret, from INI/CERAH. The project objective is to build a series of biomechanical indices characterizing the biomechanical difficulty of a wide range of urban environmental situations. These indices will rely on different biomechanical parameters such as proximity to joint limits, forces applied on the handrims, mechanical work, muscle and articular stresses, etc. The definition of a more comprehensive index, called the Comprehensive BioMechanical (CBM) cost, combining several of the previous indices, will also be a challenging objective. The results of this project will first be used in the VALMOBILE application to assist manual wheelchair (MWC) users in selecting optimal routes in the Valenciennes agglomeration (a project funded by the French National Agency for Urban Renewal and the North Department of France). The MimeTIC team is involved in the musculoskeletal simulation issues and the definition of the biomechanical costs.
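
As an illustration only, aggregating several normalized biomechanical indices into a single comprehensive cost could be sketched as a weighted average; the index names and weights below are invented for the example, since the actual CBM cost definition is an open objective of the project:

```python
def cbm_cost(indices, weights):
    """Aggregate normalized biomechanical indices (each in [0, 1])
    into a single comprehensive cost via a weighted average."""
    total = sum(weights.values())
    return sum(weights[name] * indices[name] for name in weights) / total

# Hypothetical indices for one wheelchair route segment
indices = {"joint_limit_proximity": 0.7, "handrim_force": 0.4, "mechanical_work": 0.5}
weights = {"joint_limit_proximity": 2.0, "handrim_force": 1.0, "mechanical_work": 1.0}
cost = cbm_cost(indices, weights)  # → 0.575
```

Because each index is normalized, the aggregate stays in [0, 1] and route segments remain directly comparable.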


ANR JCJC Per2

Participants: Ludovic Hoyet, Benjamin Niay, Anne-Hélène Olivier, Richard Kulpa, Franck Multon.

Per2 is a 42-month ANR JCJC project (2018-2022) entitled Perception-based Human Motion Personalisation (budget: 280kE).

The objective of this project is to focus on how viewers perceive motion variations to automatically produce natural motion personalisation accounting for inter-individual variations. In short, our goal is to automate the creation of motion variations to represent given individuals according to their own characteristics, and to produce natural variations that are perceived and identified as such by users. Challenges addressed in this project consist in (i) understanding and quantifying what makes motions of individuals perceptually different, (ii) synthesising motion variations based on these identified relevant perceptual features, according to given individual characteristics, and (iii) leveraging even further the synthesis of motion variations and to explore their creation for interactive large-scale scenarios where both performance and realism are critical.

This work is performed in collaboration with Julien Pettré from Rainbow team.


  • Title: Organized Pedestrian Movement in Public Space
  • Duration: 2017 - 2020
  • Coordinator: Université de Haute-Alsace
  • Partners:
    • Universität Koblenz-Landau Hochschule München (DE)
    • Polizei Rheinland-Pfalz Hochschule der Polizei Polizeipräsidien (DE)
    • Deutsches Forschungsinstitut für öffentliche Verwaltung Speyer (DE)
    • virtualcitySYSTEMS GmbH (DE)
    • Technische Universität Kaiserslautern (DE)
    • Université de Haute-Alsace (FR)
    • ONHYS (FR)
    • Centre de Recherche de l’Ecole des Officiers de la Gendarmerie Nationale (FR)
    • Inria (FR)
  • Inria contact: Julien Pettré, team Rainbow
  • Summary: Highly controversial group parades or political demonstrations are seen as a major threat to urban security, since the opposed views of participants and opponents can lead to violence or even terrorist attacks. Because urban parades and demonstrations (UPMs) move through a large part of a city, it is particularly difficult for civil security forces (FCS) to guarantee security during these urban events without endangering one of the most important indicators of a free society. OPMoPS is a three-year Research and Innovation project in which partners representing FCS cooperate with researchers from academic institutions to develop a decision-support tool that can help them in both the preparation and crisis-management phases of UPMs. The development of this tool will be conditioned by the needs of the FCS, but also by the results of research into the social behaviour of participants and opponents. The latter, as well as the evaluation of legal and ethical issues related to the proposed technical solutions, constitutes an important part of the proposed research. The specific technical problems to be faced by the Franco-German consortium are: optimisation methods for planning the routes of UPMs, transport to and from the UPM, planning of FCS staffing and locations, monitoring of UPMs using fixed and mobile cameras, and simulation methods, including their visualisation, with special emphasis on social behaviour. The methods will be applicable to the preparation and organisation of UPMs, as well as to the management of critical situations or unexpected incidents. This work is performed in collaboration with Julien Pettré from the Rainbow team.


ANR PRCI HoBiS

Participants: Franck Multon, Armel Crétual, Georges Dumont, Charles Pontonnier, Anthony Sorel.

HoBiS is a 42-month ANR collaborative (PRCI) project (2018-2022) entitled Hominin BipedalismS: Exploration of bipedal gaits in Hominins thanks to Specimen-Specific Functional Morphology. HoBiS is led by the Muséum National d'Histoire Naturelle (CNRS), with CNRS/LAAS and Antwerpen University (Belgium), with a total budget of 541kE (140kE for MimeTIC).

HoBiS (Hominin BipedalismS) is a pluridisciplinary research project, fundamental in nature and centred on palaeoanthropological questions related to habitual bipedalism, one of the most striking features of the human lineage. Recent discoveries (up to 7 My) highlight an unexpected diversity of locomotor anatomies in Hominins that leads palaeoanthropologists to hypothesize that habitual bipedal locomotion took distinct shapes through our phylogenetic history. In early Hominins, this diversity could reveal a high degree of locomotor plasticity which favoured their evolutionary success in the changing environments of the late Miocene and Pliocene. Furthermore, one can hypothesize, based on biomechanical theory, that differences in gait characteristics, even slight ones, have impacted the energy balance of hominin species and thus their evolutionary success. However, given the fragmented nature of fossil specimens, previous morphometric and anatomo-functional approaches developed by biologists and palaeoanthropologists do not allow the assessment of the biomechanical and energetic impacts of such subtle morphological differences, and the manner in which hominin species walked remains unknown. To tackle this problem, HoBiS proposes as its main objective a totally new specimen-specific approach in evolutionary anthropology, named Specimen-Specific Functional Morphology: inferring plausible complete locomotor anatomies based on fossil remains, and linking these reconstructed anatomies and the corresponding musculoskeletal models (MSM) with plausible gaits using simulations. Both sub-objectives will make use of extensive comparative anatomical and gait biomechanical databases. To this end, we will integrate anatomical and functional studies with tools for anatomical modelling, optimization and simulation rooted in informatics, biomechanics, and robotics, to build an in-silico decision-support system (DSS).
This DSS will provide biomechanical simulations and energetic estimations of the most plausible bipedal gaits for a variety of hominin species based on available remains, from partial to well-preserved specimens. To achieve this main objective, the project will address a series of sub-objectives and challenges.

MimeTIC is Leader of WP3 "Biomechanical simulation", aiming at predicting plausible bipedal locomotion based on paleoanthropological heuristics and a given MSM.

Labex CominLabs : Moonlight

Participants: Guillaume Nicolas, Nicolas Bideau.

Moonlight is a 2-year Labex CominLabs project (2018-2019). Amount: 55kE (including a one-year postdoctoral fellowship). Partners: Granit team (IRISA), M2S Lab.

The Moonlight project is part of an effort to transpose the tools and methodologies of motion capture from optoelectronic equipment to inertial measurement units (IMUs). More specifically, the overall objective of the Moonlight project is to design a new embedded system to analyze cyclists’ movements in real conditions, i.e. outside the laboratory. This requires estimating reliable 3D joint angles, lower-limb kinematics and pedal orientation. IMUs are used as an alternative to optoelectronic motion capture, but challenges remain regarding sensor-to-segment misalignment and drift. Indeed, an accurate real-time orientation of the crank is necessary to recover limb position. To achieve this goal, data fusion algorithms combining IMU data and pedal orientation are implemented. A wireless sensor network with an accurate time-synchronization mechanism is needed to fuse the data from all sensor nodes on a tablet. Finally, the system must meet size, energy consumption and ease-of-use constraints.
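
For illustration, the drift problem mentioned above is commonly mitigated by fusing the gyroscope (accurate short-term, but drifting) with a drift-free reference such as the accelerometer-derived angle. The following one-dimensional complementary-filter sketch is a minimal, generic example, not the project's actual algorithm; the function name and parameters are illustrative:

```python
import numpy as np

def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """Fuse gyroscope rates (deg/s) with accelerometer-derived angles (deg)
    to estimate a segment angle over time while limiting gyroscope drift."""
    angles = np.empty(len(gyro_rates))
    angle = accel_angles[0]  # initialise from the drift-free reference
    for i, (rate, acc) in enumerate(zip(gyro_rates, accel_angles)):
        # Integrate the gyro (smooth, drifts long-term) and pull the
        # estimate toward the accelerometer (noisy, but unbiased).
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc
        angles[i] = angle
    return angles
```

With a biased gyroscope (e.g. a constant 0.5 deg/s offset), pure integration drifts without bound, whereas the fused estimate stays bounded near the reference angle; the weight `alpha` trades gyroscope smoothness against accelerometer correction.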

9.3.3 National scientific collaborations


Cavaletic

Participants: Franck Multon.

The Cavaletic collaborative project is led by the University Bretagne Sud and also involves University Rennes 2 (CREAD Lab.). It has been funded by the national IFCE (Institut Français du Cheval et de l'Equitation) to develop and evaluate technological assistance in horse-riding learning, using a user-centered approach. MimeTIC is involved in measuring expert and non-expert horse riders' motions in standardized situations in order to develop metrics of riders' performance. These will be used to develop a technological system, embedded on the user, that evaluates performance and provides real-time feedback to correct potential errors.

The project funded by IFCE ended in 2018, but we obtained a 30K€ budget from SATT Ouest Valorisation to finish the development of the technological prototype, to evaluate the possibility of patenting the process, and to transfer it to private companies. This project is in collaboration with the LEGO lab. at University Bretagne Sud and the CAIRN Inria team.

French Federation of Tennis

Participants: Richard Kulpa, Benoit Bideau, Pierre Touzard.

An exclusive contract has been signed between the M2S laboratory and the French Federation of Tennis for three years. The goal is to perform biomechanical analyses of 3D tennis serves on a population of 40 players of the Pôle France. The objective is to determine the link between injuries and the biomechanical constraints on joints and muscles, depending on the age and gender of the players. Ultimately, the goal is to evaluate their training load.

9.3.4 Chaire Safran-Saint-Cyr "the enhanced soldier in the digital battlefield"

Participants: Charles Pontonnier, Pierre Puchaud.

The chair aims to address the scientific questions accompanying the evolution of the technologies equipping soldiers in mission. Within this scheme, the MimeTIC team works on generic and specific musculoskeletal models for the prototyping of load-carriage assistive devices (exoskeletons). The chair is sponsored by the SAFRAN group and led by Yvon Erhel (Professor, Écoles de Saint-Cyr Coëtquidan).


9.3.5 AUTOMA-PIED

Participants: Anne-Hélène Olivier, Armel Crétual, Anthony Sorel.

The AUTOMA-PIED project is driven by IFSTTAR. Using a virtual reality set-up, the first objective of the project is to compare pedestrian behaviour (young and older adults) when interacting with traditional or autonomous vehicles in a street-crossing scenario. The second objective is to identify postural cues that can predict whether or not the pedestrian is about to cross the street.

9.3.6 Défi Avatar

Participants: Jean Basset, Diane Dewez, Rebecca Fribourg, Ludovic Hoyet, Franck Multon.

This project aims at designing avatars (i.e., the user’s representation in virtual environments) that are better embodied, more interactive and more social, by improving the whole avatar pipeline, from acquisition and simulation to the design of novel interaction paradigms and multi-sensory feedback. It involves 6 Inria teams (GraphDeco, Hybrid, Loki, MimeTIC, Morpheo, Potioc), Prof. Mel Slater (Uni. Barcelona), and 2 industrial partners (InterDigital and Faurecia).


9.4 Regional initiatives

9.4.1 Kimea Cloud

Participants: Franck Multon, Adnane Boukhayma, Shubhendu Jena.

This project is funded by the Bretagne council and BPI France to develop a new posture assessment system for ergonomics analysis. The project is led by the Moovency start-up, in collaboration with the SME Quortex (for efficient cloud computing) and Inria (for RGB-based 3D pose reconstruction). The key idea is to design a method capable of measuring the 3D joint angles of a person filmed with a simple smartphone in an industrial setting, using an efficient cloud-computing architecture and innovative computer vision methods. The project started in November 2020 for 18 months.

10 Dissemination

10.1 Promoting scientific activities

10.1.1 Scientific events: organisation

Member of the organizing committees

  • Anne-Hélène Olivier was co-organizer of the international workshops Virtual Humans and Crowds in Immersive Environments (VHCIE 2020), co-located with the IEEE VR conference; Modeling and Animating Realistic Crowds and Humans (MARCH 2020), co-located with the AIVR conference; and Walker Behaviour: from Analysis to Applications.

10.1.2 Scientific events: selection

Chair of conference program committees

  • Ludovic Hoyet was Co-Program Chair for the 2020 Workshop on Affects, Compagnons Artificiels et Interactions (WACAI, France)
  • Anne-Hélène Olivier was Co-Program Chair for the International Conference EuroVR 2020

Member of the conference program committees

  • Ludovic Hoyet, IEEE VR Journal Paper Track, ACM Motion Interactions and Games, ACM Symposium on Applied Perception
  • Franck Multon, ACM Motion Interaction in Games
  • Anne-Hélène Olivier: ACM Symposium on Applied Perception, WACAI Workshop


Reviewer

  • Ludovic Hoyet, Eurographics, IEEE Virtual Reality, Pacific Graphics, Modeling and Animating Realistic Crowds and Humans - AIVR Workshops, Virtual Humans and Crowds in Immersive Environments - IEEE VR Workshops
  • Franck Multon, IEEE ICRA, ACM Motion Interaction in Games
  • Charles Pontonnier, IEEE VR
  • Anne-Hélène Olivier, IEEE VR, Pacific Graphics
  • Adnane Boukhayma, IEEE CVPR, IEEE ICCV, ECCV

10.1.3 Journal

Member of the editorial boards

  • Franck Multon is associate editor of the journal Computer Animation and Virtual Worlds (Wiley)
  • Franck Multon is associate editor of the journal Presence (MIT Press)

Reviewer - reviewing activities

  • Ludovic Hoyet, ACM Transactions on Graphics, IEEE Transactions on Visualization and Computer Graphics
  • Franck Multon, International Journal of Industrial Ergonomics, Sensors, Computer Graphics Forum, Virtual Reality, Computer and Graphics, Transactions on Information Forensics and Security, Multimodal Technologies and Interaction, Applied Sciences, Frontiers in Computer Science - section Digital Public Health, IEEE Journal of Biomedical and Health Informatics, ACM Transaction on Graphics, IEEE Robotics and Automation
  • Charles Pontonnier, Applied Ergonomics, Applied Sciences
  • Anne-Hélène Olivier, Computer and Graphics, Motor Control

10.1.4 Invited talks

  • Franck Multon was invited speaker in the Laval Virtual Days on Sports, 17/09/2020
  • Charles Pontonnier was invited speaker at CC seminar, G-SCOP lab (Grenoble), 28/04/2020
  • Adnane Boukhayma was invited speaker at the CRDH Workshop at the IEEE AIVR Conference, 16/12/2020

10.1.5 Scientific expertise

  • Franck Multon was scientific expert for the bi-national evaluation of researchers from UGE (former IFSTTAR) in Lyon
  • Franck Multon was international scientific expert for the Fonds de recherche du Québec - Santé (FRQS Canada) for the Fondation canadienne pour l’innovation (FCI) call
  • Franck Multon was scientific expert for the Region Normandie/ANR joint call

10.1.6 Research administration

  • Ludovic Hoyet is the coordinator of the Inria Research Challenge Avatar
  • Franck Multon is responsible for the coordination of national Inria actions in Sports
  • Franck Multon is the scientific representative of Inria in the Sciences2024 group and its scientific committee
  • Franck Multon is the scientific representative of Inria in the EUR Digisport steering committee and scientific committee
  • Franck Multon is member of the Brittany commission of deontology
  • Armel Crétual is the elected head of the Sports Sciences department (STAPS) at University Rennes 2
  • Benoit Bideau is the head of the M2S Laboratory
  • Benoit Bideau is vice-president of University Rennes2, in charge of the valorisation

10.2 Teaching - Supervision - Juries


  • Master : Franck Multon, co-leader of the IEAP Master (1 and 2) "Ingénierie et Ergonomie de l'Activité Physique", STAPS, University Rennes2, France
  • Master : Franck Multon, "Santé et Performance au Travail : étude de cas", leader of the module, 30H, Master 1 M2S, University Rennes2, France
  • Master : Franck Multon, "Analyse Biomécanique de la Performance Motrice", leader of the module, 30H, Master 1 M2S, University Rennes2, France
  • Master: Charles Pontonnier, leader of the first year of the master "Ingénierie des systèmes complexes", mechatronics, Ecole normale supérieure de Rennes, France
  • Master: Charles Pontonnier, Responsible for student internships (L3 and M1 "Ingénierie des systèmes complexes"), 15H, Ecole Normale Supérieure de Rennes, France
  • Master: Charles Pontonnier, "Numerical simulation of polyarticulated systems", leader of the module, 22H, M1 Mechatronics, Ecole Normale Supérieure de Rennes, France
  • Master: Charles Pontonnier, Research projects, 20H, Ecole Normale Supérieure de Rennes, France
  • Master : Georges Dumont, Responsible for the second year of the master Engineering of complex systems, École Normale Supérieure de Rennes and Rennes 1 University, France
  • Master : Georges Dumont, Mechanical simulation in Virtual reality, 36H, Master Engineering of complex systems and Mechatronics, Rennes 1 University and École Normale Supérieure de Rennes, France
  • Master : Georges Dumont, Mechanics of deformable systems, 40H, Master, École Normale Supérieure de Rennes, France
  • Master : Georges Dumont, oral preparation to agregation competitive exam, 20H, Master, École Normale Supérieure de Rennes, France
  • Master : Georges Dumont, Vibrations in Mechanics, 10H, Master, École Normale Supérieure de Rennes, France
  • Master : Georges Dumont, Finite Element method, 12H, Master, École Normale Supérieure de Rennes, France
  • Master : Ludovic Hoyet, Computer Graphics, 12h, Ecole Normale Supérieure de Rennes, France
  • Master : Ludovic Hoyet, Motion Analysis and Gesture Recognition, 12h, INSA Rennes, France
  • Master : Ludovic Hoyet, Réalité Virtuelle pour l'Analyse Ergonomique, Master Ingénierie et Ergonomie des Activités Physique, 21h, University Rennes 2, France
  • Master : Anne-Hélène Olivier, co-leader of the APPCM Master (1 and 2) "Activités Physiques et Pathologies Chroniques et Motrices", STAPS, University Rennes2, France
  • Master : Anne-Hélène Olivier, "Biostatistiques", 21H, Master 2 APPCM/IEAP, University Rennes2, France
  • Master : Anne-Hélène Olivier, "Evaluation fonctionnelle des pathologies motrices", 3H Master 2 APPCM, University Rennes2, France
  • Master : Anne-Hélène Olivier, "Maladie neurodégénératives : aspects biomécaniques", 2H Master 1 APPCM, University Rennes2, France
  • Master : Anne-Hélène Olivier, "Biostatistiques", 7H, Master 1 EOPS, University Rennes2, France
  • Master : Anne-Hélène Olivier, "Méthodologie", 10H, Master 1 IEAP/APPCM, University Rennes2, France
  • Master : Anne-Hélène Olivier, "Contrôle moteur : Boucle perceptivo-motrice", 3H, Master 1 IEAP, Université Rennes 2, France
  • Master: Fabrice Lamarche, "Compilation pour l'image numérique", 29h, Master 1, ESIR, University of Rennes 1, France
  • Master: Fabrice Lamarche, "Synthèse d'images", 12h, Master 1, ESIR, University of Rennes 1, France
  • Master: Fabrice Lamarche, "Synthèse d'images avancée", 28h, Master 1, ESIR, University of Rennes 1, France
  • Master: Fabrice Lamarche, "Modélisation Animation Rendu", 36h, Master 2, ISTIC, University of Rennes 1, France
  • Master: Fabrice Lamarche, "Jeux vidéo", 26h, Master 2, ESIR, University of Rennes 1, France
  • Master: Fabrice Lamarche, "Motion for Animation and Robotics", 9h, Master 2 SIF, ISTIC, University of Rennes 1, France.
  • Master : Armel Crétual, "Méthodologie", leader of the module, 20H, Master 1 M2S, University Rennes2, France
  • Master : Armel Crétual, "Biostatistiques", leader of the module, 30H, Master 2 M2S, University Rennes2, France
  • Master : Richard Kulpa, "Boucle analyse-modélisation-simulation du mouvement", 27h, leader of the module, Master 2, Université Rennes 2, France
  • Master : Richard Kulpa, "Méthodes numériques d'analyse du geste", 27h, leader of the module, Master 2, Université Rennes 2, France
  • Master : Richard Kulpa, "Cinématique inverse", 3h, leader of the module, Master 2, Université Rennes 2, France
  • Master: Marc Christie, "Multimedia Mobile", Master 2, leader of the module, 32h, Computer Science, University of Rennes 1, France
  • Master: Marc Christie, "Projet Industriel Transverse", Master 2, 32h, leader of the module, Computer Science, University of Rennes 1, France
  • Master: Marc Christie, "Modélisation Animation Rendu", Master 2, 16h, leader of the module, Computer Science, University of Rennes 1, France
  • Master: Marc Christie, "Advanced Computer Graphics", Master 1, 10h, leader of the module, Computer Science, ENS, France
  • Licence : Franck Multon, "Ergonomie du poste de travail", Licence STAPS L2 & L3, University Rennes2, France
  • Licence: Charles Pontonnier, "Lagrangian Mechanics" , leader of the module, 22H, M2 Mechatronics, Ecole Normale Supérieure de Rennes, France
  • Licence: Charles Pontonnier, "Serial Robotics", leader of the module, 24H, L3 Mechatronics, Ecole Normale Supérieure de Rennes, France
  • Licence : Anne-Hélène Olivier, "Analyse cinématique du mouvement", 100H, Licence 1, University Rennes 2, France
  • Licence : Anne-Hélène Olivier, "Anatomie fonctionnelle", 7H, Licence 1, University Rennes 2, France
  • Licence : Anne-Hélène Olivier, "Effort et efficience", 12H, Licence 2, University Rennes 2, France
  • Licence : Anne-Hélène Olivier, "Locomotion et handicap", 12H, Licence 3, University Rennes 2, France
  • Licence : Anne-Hélène Olivier, "Biomécanique spécifique aux APA", 8H, Licence 3, University Rennes 2, France
  • Licence : Anne-Hélène Olivier, "Biomécanique du vieillissement", 12H, Licence 3, University Rennes 2, France
  • Licence: Fabrice Lamarche, "Initiation à l'algorithmique et à la programmation", 56h, Licence 3, ESIR, University of Rennes 1, France
  • Licence: Fabrice Lamarche, "Programmation en C++", 46h, Licence 3, ESIR, University of Rennes 1, France
  • Licence: Fabrice Lamarche, "IMA", 24h, Licence 3, ENS Rennes, ISTIC, University of Rennes 1, France
  • Licence: Armel Crétual, "Analyse cinématique du mouvement", 100h, Licence 1, University Rennes 2, France
  • Licence: Richard Kulpa, "Biomécanique (dynamique en translation et rotation)", 48h, Licence 2, Université Rennes 2, France
  • Licence: Richard Kulpa, "Méthodes numériques d'analyse du geste", 48h, Licence 3, Université Rennes 2, France
  • Licence: Richard Kulpa, "Statistiques et informatique", 15h, Licence 3, Université Rennes 2, France


  • PhD (beginning September 2017, defended December 9th 2020): Pierre Puchaud, Développement d’un modèle musculo-squelettique générique du soldat en vue du support de son activité physique, Ecole normale supérieure, Charles Pontonnier & Nicolas Bideau & Georges Dumont
  • PhD (beginning September 2017, defended December 4th 2020): Simon Hilt, Haptique Biofidèle pour l’Interaction en réalité virtuelle, Ecole normale supérieure, Georges Dumont, Charles Pontonnier
  • PhD (beginning Sept 2017, defended November 4th 2020) Rebecca Fribourg, Enhancing Avatars in Virtual Reality through Control, Interactions and Feedback, Ferran Argelaguet (Hybrid team), Ludovic Hoyet, Anatole Lécuyer (Hybrid team).
  • PhD (beginning November 2017, defended December 14th 2020) Florian Berton, Design of a virtual reality platform for studying immersion and behaviours in aggressive crowds, Ludovic Hoyet, Anne-Hélène Olivier, Julien Pettré (Rainbow team).
  • PhD in progress (beginning January 2019): Nils Hareng, simulation of plausible bipedal locomotion of human and non-human primates, University Rennes 2, Franck Multon & Bruno Watier (CNRS LAAS in Toulouse)
  • PhD in progress (beginning January 2019): Nicolas Olivier, Adaptive Avatar Customization for Immersive Experience, Cifre InterDigital, Franck Multon, Ferran Argelaguet (Hybrid team), Quentin Avril (InterDigital), Fabien Danieau (InterDigital)
  • PhD in progress (beginning September 2017): Lyse Leclerc, Intérêts dans les activités physiques du rétablissement de la fonction inertielle des membres supérieurs en cas d’amputation ou d’atrophie, Armel Crétual, Diane Haering
  • PhD in progress (beginning September 2018) : Jean Basset, Learning Morphologically Plausible Pose Transfer, Inria, Edmond Boyer (Morpheo Inria Grenoble), Franck Multon
  • PhD in progress (beginning September 2018): Olfa Haj Mahmoud, Monitoring de l'efficience gestuelle d'opérateurs sur postes de travail, University Rennes 2, Franck Multon, Georges Dumont, Charles Pontonnier
  • PhD in progress (beginning December 2020): Mohamed Younes, Learning and simulating strategies in sports for VR training, University Rennes 1, Franck Multon, Richard Kulpa, Ewa Kijak, Simon Malinowski
  • PhD in progress (beginning September 2019) : Claire Livet, Dynamique contrainte pour l'analyse musculo-squelettique en temps rapide : vers des méthodes alternatives d'estimation des forces musculaires mises en jeu dans le mouvement humain, Ecole normale supérieure, Georges Dumont, Charles Pontonnier
  • PhD in progress (beginning September 2019): Louise Demestre, simulation MUsculo-squelettique et Structure Elastique pour le Sport (MUSES), Ecole normale supérieure, Georges Dumont, Charles Pontonnier, Nicolas Bideau, Guillaume Nicolas
  • PhD in progress (beginning October 2018): Diane Dewez, Avatar-Based Interaction in Virtual Reality, Ferran Argelaguet (Hybrid team), Ludovic Hoyet, Anatole Lécuyer (Hybrid team).
  • PhD in progress (beginning October 2018): Benjamin Niay, A framework for synthesizing personalised human motions from motion capture data and perceptual information, Ludovic Hoyet, Anne-Hélène Olivier, Julien Pettré (Rainbow team).
  • PhD in progress (beginning Sept 2019): Alberto Jovane, Modélisation de mouvements réactifs et comportements non verbaux pour la création d’acteurs digitaux pour la réalité virtuelle, Marc Christie, Ludovic Hoyet, Claudio Pacchierotti (Rainbow team), Julien Pettré (Rainbow team).
  • PhD in progress (beginning Nov 2019): Adèle Colas, Modélisation de comportements collectifs réactifs et expressifs pour la réalité virtuelle, Ludovic Hoyet, Anne-Hélène Olivier, Claudio Pacchierotti (Rainbow team), Julien Pettré (Rainbow team).
  • PhD in progress (beginning Oct 2018) Hugo Brument, Toward user-adapted interactions techniques based on human locomotion laws for navigating in virtual environments, Anne-Hélène Olivier, Ferran Argelaguet (Hybrid team), Maud Marchal (Rainbow team).
  • PhD in progress: Florence Gaillard, Validation d’un protocole d’analyse 3D du mouvement des membres supérieurs en situation écologique chez les enfants ayant une Hémiplégie Cérébrale Infantile, Armel Crétual, Isabelle Bonan
  • PhD in progress (beginning September 2016): Karim Jamal, Effets des stimulations sensorielles par vibration des muscles du cou sur les perturbations posturales secondaires aux troubles de la représentation spatiale après un accident vasculaire cérébral, Isabelle Bonan, Armel Crétual
  • PhD in progress (beginning Sep 2018): Carole Puil, Impact d’une stimulation plantaire orthétique sur la posture d’individus sains et posturalement déficients au cours de la station debout, et lors de la marche, Armel Crétual, Anne-Hélène Olivier
  • PhD in progress (beginning September 2017): Théo Perrin, Evaluation de la perception de l'effort en réalité virtuelle pour l'entraînement sportif, Richard Kulpa, Benoit Bideau
  • PhD in progress (beginning September 2019): Annabelle Limballe, Anticipation dans les sports de combat : la réalité virtuelle comme solution innovante d'entraînement, Richard Kulpa & Benoit Bideau
  • PhD in progress (beginning September 2019): Alexandre Vu, Evaluation de l'influence des feedbacks sur la capacité d'apprentissage dans le cadre d'interactions complexes entre joueurs et influence de ces feedbacks en fonction de l'activité sportive, Richard Kulpa & Benoit Bideau
  • PhD in progress (beginning December 2020): Qian Li, Synthèse neurale de nouvelles vues de personnes dynamiques dans des videos monoculaires, University Rennes 1, Franck Multon, Adnane Boukhayma
  • PhD in progress (beginning September 2020): Pauline Morin, Adaptation des méthodes de prédiction des efforts d'interaction pour l'analyse biomécanique du mouvement en milieu écologique, Ecole normale supérieure de Rennes, Georges Dumont, Charles Pontonnier
  • PhD in progress (beginning June 2020): Lucas Mourot, Learning-Based Human Character Animation Synthesis for Content Production, Pierre Hellier (InterDigital), Ludovic Hoyet, Franck Multon, François Le Clerc (InterDigital).
  • PhD in progress (beginning September 2020): Agathe Bilhaut, Stratégies perceptivo-motrices durant la locomotion des patients douloureux chroniques : nouvelles méthodes d’analyse et de suivi, Armel Crétual, Anne-Hélène Olivier, Mathieu Ménard (Institut Ostéopathie Rennes, M2S)
  • PhD in progress (beginning October 2020): Emilie Leblong, Prise en compte des interactions sociales dans un simulateur de conduite de fauteuil roulant électrique en réalité virtuelle : favoriser l’apprentissage pour une mobilité inclusive, Anne-Hélène Olivier, Marie Babel (Rainbow team)
  • PhD in progress (beginning October 2020): Raphaël Dang-Nhu, Learning and evaluating 3D human motion synthesis, Anne-Hélène Olivier, Stefanie Wuhrer (Morpheo team)
  • PhD in progress (beginning November 2020): Thomas Chatagnon, Micro-to-macro energy-based interaction models for dense crowds behavioral simulations, Ecole normale supérieure de Rennes, Ludovic Hoyet, Anne-Hélène Olivier, Charles Pontonnier, Julien Pettré (Rainbow team).
  • PhD in progress (beginning November 2020): Vicenzo Abichequer-Sangalli, Humains virtuels expressifs et réactifs pour la réalité virtuelle, Marc Christie, Ludovic Hoyet, Carol O'Sullivan (TCD, Ireland), Julien Pettré (Rainbow team).
  • PhD in progress (beginning November 2020): Tairan Yin, Création de scènes peuplées dynamiques pour la réalité virtuelle, Marc Christie, Marie-Paule Cani (LIX), Ludovic Hoyet, Julien Pettré (Rainbow team).


  • PhD defense: Tel Aviv University, Kfir Aberman, "Learning the Structure of Motion", October 2020. Franck Multon, Reviewer
  • PhD defense: Toulouse University, Thibault Blanc-Beyne, "Estimation de posture 3D à partir de données imprécises et incomplètes : application à l'analyse d'activité d'opérateurs humains dans un centre de tri", December 2020. Franck Multon, Reviewer
  • PhD defense: Lille University, Nadia Hosni, "From functional PCA to Convolutional Deep AE on Kendall's Shape Trajectories for 3D Gait Analysis and Recognition", November 2020. Franck Multon, Examiner

10.2.1 Internal or external Inria responsibilities

Franck Multon coordinated Inria's national actions in Sports.

11 Scientific production

11.1 Major publications

  • 1 article: Jean Basset, Stefanie Wuhrer, Edmond Boyer and Franck Multon. "Contact preserving shape transfer: Retargeting motion from one shape to another". Computers and Graphics 89, 2020, 11-23
  • 2 article: Ludovic Burg, Christophe Lino and Marc Christie. "Real-time Anticipation of Occlusions for Automated Camera Control in Toric Space". Computer Graphics Forum xx(2), July 2020, 1-11
  • 3 article: Charles Faure, Annabelle Limballe, Benoit Bideau and Richard Kulpa. "Virtual reality to assess and train team ball sports performance: A scoping review". Journal of Sports Sciences 38(2), January 2020, 192-205
  • 4 article: Rebecca Fribourg, Ferran Argelaguet Sanz, Anatole Lécuyer and Ludovic Hoyet. "Avatar and Sense of Embodiment: Studying the Relative Preference Between Appearance, Control and Point of View". IEEE Transactions on Visualization and Computer Graphics 26(5), May 2020, 2062-2072
  • 5 article: Diane Haering, Charles Pontonnier, Nicolas Bideau, Guillaume Nicolas and Georges Dumont. "Using Torque-Angle and Torque-Velocity Models to Characterize Elbow Mechanical Function: Modeling and Applied Aspects". Journal of Biomechanical Engineering 141(8), May 2019, 084501
  • 6 article: Hongda Jiang, Bin Wang, Xi Wang, Marc Christie and Baoquan Chen. "Example-driven virtual cinematography by learning camera behaviors". ACM Transactions on Graphics 39(4), July 2020
  • 7 article: Antoine Muller, Charles Pontonnier and Georges Dumont. "Motion-based prediction of hands and feet contact efforts during asymmetric handling tasks". IEEE Transactions on Biomedical Engineering, 2019, 1-11
  • 8 article: Antoine Muller, Charles Pontonnier, Pierre Puchaud and Georges Dumont. "CusToM: a Matlab toolbox for musculoskeletal simulation". Journal of Open Source Software 4(33), January 2019, 1-3
  • 9 article: Anthony Sorel, Pierre Plantard, Nicolas Bideau and Charles Pontonnier. "Studying fencing lunge accuracy and response time in uncertain conditions with an innovative simulator". PLoS ONE 14(7), 2019, e0218959
  • 10 article: Katja Zibrek, Benjamin Niay, Anne-Hélène Olivier, Ludovic Hoyet, Julien Pettré and Rachel McDonnell. "The effect of gender and attractiveness of motion on proximity in virtual reality". ACM Transactions on Applied Perception 17(4), November 2020, 1-15

11.2 Publications of the year

International journals

National journals

International peer-reviewed conferences

Conferences without proceedings

  • 44 inproceedings: Olfa Haj Mahmoud, Charles Pontonnier, Georges Dumont, Stéphane Poli and Franck Multon. "Posture Assessment and Subjective Scale Agreement in Picking Tasks with Low Masses". AHFE - 11th Applied Human Factors and Ergonomics Conference, San Diego, United States, July 2020
  • 45 inproceedings: Pierre Puchaud, Simon Kirchhofer, Georges Dumont, Nicolas Bideau and Charles Pontonnier. "Dimension Reduction of Anthropometric Measurements with Support Vector Machine for Regression: Application to a French Military Personnel Database". AHFE - 11th Applied Human Factors and Ergonomics Conference, San Diego, United States, July 2020
  • 46 inproceedings: Carole Puil, Anne-Hélène Olivier and Armel Crétual. "Clinical evaluation of plantar exteroceptive inefficiency: normalisation of Plantar quotient". ENPODHE, Malaga, Spain, 2020

Doctoral dissertations and habilitation theses

  • 47 thesis: Simon Hilt. Haptics Virtual Reality Biofidelity Ergonomics. École normale supérieure de Rennes, December 2020
  • 48 thesis: Pierre Puchaud. Generic and specific musculoskeletal modeling for the support of the soldier's physical activity. École normale supérieure de Rennes, December 2020

Reports & preprints

  • 49 misc: Alexandre Bruckert, Marc Christie and Olivier Le Meur. "Where to look at the movies: Analyzing visual attention to understand movie editing". February 2021

Other scientific publications