2022
Activity report
Project-Team
MIMETIC
RNSR: 201120991Y
In partnership with:
Université Rennes 1, Université Haute Bretagne (Rennes 2), École normale supérieure de Rennes
Team name:
Analysis-Synthesis Approach for Virtual Human Simulation
In collaboration with:
Institut de recherche en informatique et systèmes aléatoires (IRISA)
Domain
Perception, Cognition and Interaction
Theme
Interaction and visualization
Creation of the Project-Team: 2014 January 01

Keywords

Computer Science and Digital Science

  • A5.1.3. Haptic interfaces
  • A5.1.5. Body-based interfaces
  • A5.1.9. User and perceptual studies
  • A5.4.2. Activity recognition
  • A5.4.5. Object tracking and motion analysis
  • A5.4.8. Motion capture
  • A5.5.4. Animation
  • A5.6. Virtual reality, augmented reality
  • A5.6.1. Virtual reality
  • A5.6.3. Avatar simulation and embodiment
  • A5.6.4. Multisensory feedback and interfaces
  • A5.10.3. Planning
  • A5.10.5. Robot interaction (with the environment, humans, other robots)
  • A5.11.1. Human activity analysis and recognition
  • A6. Modeling, simulation and control

Other Research Topics and Application Domains

  • B1.2.2. Cognitive science
  • B2.5. Handicap and personal assistances
  • B2.8. Sports, performance, motor skills
  • B5.1. Factory of the future
  • B5.8. Learning and training
  • B9.2.2. Cinema, Television
  • B9.2.3. Video games
  • B9.4. Sports

1 Team members, visitors, external collaborators

Research Scientists

  • Franck Multon [Team leader, INRIA, Senior Researcher, HDR]
  • Adnane Boukhayma [INRIA, Researcher]
  • Ludovic Hoyet [Inria, Researcher, until Jun 2022, HDR]

Faculty Members

  • Benoit Bardy [UNIV MONTPELLIER, Associate Professor, HDR]
  • Nicolas Bideau [UNIV RENNES II, Associate Professor]
  • Benoit Bideau [UNIV RENNES II, Professor, HDR]
  • Marc Christie [UNIV RENNES I, Associate Professor, until Jun 2022]
  • Armel Cretual [UNIV RENNES II, Associate Professor, HDR]
  • Georges Dumont [ENS RENNES, Professor, HDR]
  • Diane Haering [UNIV RENNES II, Associate Professor]
  • Richard Kulpa [UNIV RENNES II, Associate Professor, HDR]
  • Fabrice Lamarche [UNIV RENNES I, Associate Professor]
  • Guillaume Nicolas [UNIV RENNES II, Associate Professor]
  • Anne-Hélène Olivier [UNIV RENNES II, Associate Professor, until Jun 2022, HDR]
  • Charles Pontonnier [ENS RENNES, Associate Professor, HDR]

Post-Doctoral Fellows

  • Joao Cova Regateiro [UNIV RENNES I]
  • Katja Zibrek [INRIA, until Jun 2022]

PhD Students

  • Antoine Bouvet [ENS RENNES, from Sep 2022]
  • Ludovic Burg [UNIV RENNES I, until Sep 2022]
  • Rebecca Crolan [ENS RENNES]
  • Erwan Delhaye [ENS RENNES]
  • Louise Demestre [ENS RENNES, until Nov 2022]
  • Nils Hareng [Université Rennes II, until Mar 2022]
  • Shubhendu Jena [Inria, from Jun 2022]
  • Qian Li [INRIA]
  • Annabelle Limballe [ENS Rennes]
  • Claire Livet [ENS Rennes, until Jul 2022]
  • William Mocaer [INSA Rennes, Co-supervised with Intuidoc team]
  • Pauline Morin [ENS RENNES]
  • Benjamin Niay [INRIA, until Jun 2022]
  • Nicolas Olivier [InterDigital, CIFRE, until Mar 2022]
  • Hasnaa Ouadoudi Belabzioui [MOOVENCY, CIFRE]
  • Amine Ouasfi [Inria, from Dec 2022]
  • Carole Puil [IFPEK, until Jun 2022]
  • Etienne Ricard [INRS - VANDOEUVRE- LES- NANCY, from Oct 2022]
  • Sony Saint-Auret [Inria, from Oct 2022]
  • Alexandre Vu [UNIV RENNES II]
  • Mohamed Younes [INRIA]

Technical Staff

  • Benjamin Gamblin [UNIV RENNES II, Engineer, from Nov 2022]
  • Ronan Gaugne [UNIV RENNES I]
  • Laurent Guillo [CNRS]
  • Shubhendu Jena [INRIA, Engineer, until May 2022]
  • Tangui Marchand Guerniou [Inria, Engineer, from Oct 2022]
  • Valentin Ramel [INRIA, Engineer, from Oct 2022]
  • Salome Ribault [INRIA, Engineer]
  • Anthony Sorel [UNIV RENNES II, Engineer]

Administrative Assistant

  • Nathalie Denis [INRIA]

2 Overall objectives

2.1 Presentation

MimeTIC is a multidisciplinary team whose aim is to better understand and model human activity in order to simulate realistic autonomous virtual humans: realistic behaviors, realistic motions and realistic interactions with other characters and users. This entails modeling the complexity of the human body, as well as of the environment in which it picks up information and on which it acts. A specific focus is placed on human physical activity and sports, as these raise the highest constraints and complexity when addressing such problems. MimeTIC is thus composed of experts in computer science whose research interests are computer animation, behavioral simulation, motion simulation, crowds, and interaction between real and virtual humans. MimeTIC also includes experts in sports science, motion analysis, motion sensing, biomechanics and motion control. Hence, the scientific foundations of MimeTIC are motion sciences (biomechanics, motion control, perception-action coupling, motion analysis), computational geometry (modeling of the 3D environment, motion planning, path planning) and the design of protocols in immersive environments (use of virtual reality facilities to analyze human activity).

Thanks to these skills, we aim to reach the following objective: to make virtual humans behave, move and interact in a natural manner, in order to increase immersion and improve knowledge on human motion control. In real situations (see Figure 1), people have to deal with their physiological, biomechanical and neurophysiological capabilities in order to reach a complex goal. MimeTIC hence addresses the problem of modeling the anatomical, biomechanical and physiological properties of human beings. Moreover, these characters have to deal with their environment. First, they have to perceive this environment and pick up relevant information; MimeTIC thus focuses on modeling the environment, including its geometry and associated semantic information. Second, they have to act on this environment to reach their goals, which involves cognitive processes, motion planning, joint coordination and force production.

Figure 1
Figure 1: Main objective of MimeTIC: to better understand human activity in order to improve virtual human simulations. It involves modeling the complexity of human bodies, as well as of the environments where they pick up information and act.

In order to reach the above objectives, MimeTIC has to address three main challenges:

  • deal with the intrinsic complexity of human beings, especially when addressing interactions between people, for which it is impossible to predict and model all the possible states of the system,
  • make the different components of human activity control (such as the biomechanical and physical, reactive, cognitive, rational and social layers) interact, while each of them is modeled with completely different states and time samplings,
  • and measure human activity while balancing between ecological and controllable protocols, and extract relevant information from large databases.

As opposed to many classical approaches in computer simulation, which mostly propose simulation without trying to understand how real people act, the team promotes a coupling between human activity analysis and synthesis, as shown in Figure 2.

Figure 2
Figure 2: Research path of MimeTIC: coupling analysis and synthesis of human activity enables us to create more realistic autonomous characters and to evaluate assumptions about human motion control.

In this research path, improving knowledge on human activity enables us to highlight fundamental assumptions about the natural control of human activities. These contributions can be promoted in, e.g., biomechanics, motion sciences and neurosciences. Based on these assumptions, we propose new algorithms for controlling autonomous virtual humans. The virtual humans can perceive their environment and decide on the most natural action to reach a given goal. This work is promoted in computer animation and virtual reality, and has applications in robotics through collaborations. Once autonomous virtual humans have the ability to act as real humans would in the same situation, it is possible to make them interact with others, i.e., with autonomous characters (for crowd or group simulations) as well as with real users. The key idea here is to analyze to what extent the assumptions proposed at the first stage lead to natural interactions with real users. This process enables the validation of both our assumptions and our models.

Among all the problems and challenges described above, MimeTIC focuses on the following domains of research:

  • motion sensing, which is a key issue for extracting information from raw motion capture data and thus proposing assumptions on how people control their activity,
  • human activity & virtual reality, which is explored through sports applications in MimeTIC. This domain enables the design of new methods for analyzing the perception-action coupling in human activity, and the validation of whether autonomous characters lead to natural interactions with users,
  • interactions in small and large groups of individuals, to understand and model interactions with great individual variability, such as in crowds,
  • virtual storytelling, which enables us to design and simulate complex scenarios involving several humans who have to satisfy numerous complex constraints (such as adapting to the real-time environment in order to play an imposed scenario), and to design the coupling with the camera scenario to provide the user with a real cinematographic experience,
  • biomechanics, which is essential to offer autonomous virtual humans that can react to physical constraints in order to reach high-level goals, such as maintaining balance in dynamic situations or selecting a natural motor behavior among the whole theoretical solution space for a given task,
  • autonomous characters, a transversal domain that reuses the results of all the other domains to make these heterogeneous assumptions and models provide the character with natural behaviors and autonomy.

3 Research program

3.1 Biomechanics and Motion Control

Human motion control is a highly complex phenomenon that involves several layered systems, as shown in Figure 3. Each layer of this controller is responsible for dealing with perceptual stimuli in order to decide the actions that should be applied to the human body and its environment. Due to the intrinsic complexity of the information involved (internal representation of the body and mental state, external representation of the environment), it is almost impossible to model all the possible states of the system. Even for simple problems, there generally exists an infinity of solutions. For example, from the biomechanical point of view, there are many more actuators (i.e. muscles) than degrees of freedom, leading to an infinity of muscle activation patterns for a unique joint rotation. From the reactive point of view, there exists an infinity of paths to avoid a given obstacle in navigation tasks. At each layer, the key problem is to understand how people select one solution among these infinite state spaces. Several scientific domains have addressed this problem from specific points of view, such as physiology, biomechanics, neurosciences and psychology.

Figure 3
Figure 3: Layers of the motion control natural system in humans.

In biomechanics and physiology, researchers have proposed hypotheses based on accurate joint modeling (to identify the real anatomical rotational axes), energy minimization, force and torque minimization, comfort maximization (i.e. avoiding joint limits), and physiological limitations in muscle force production. All these constraints have been used in optimal controllers to simulate natural motions. The main problem is thus to define how these constraints are composed, for example by finding the weights used to linearly combine these criteria in order to generate a natural motion. Musculoskeletal models are stereotypical examples for which there exists an infinity of muscle activation patterns, especially when dealing with antagonist muscles. An unresolved problem is to define how to use the above criteria to retrieve the actual activation patterns, as optimization approaches still lead to unrealistic ones. It remains an open problem that requires multidisciplinary skills, including computer simulation, constraint solving, biomechanics, optimal control, physiology and neuroscience.
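
To make the composition concrete, below is a minimal sketch of how such criteria can be linearly combined in an optimization, for a toy one-joint reaching task. The cost terms, weights and task are illustrative assumptions, not the team's actual models.

```python
# Toy example: generating a 1-DoF reaching motion by minimizing a
# weighted combination of motion criteria (energy, jerk, comfort).
import numpy as np
from scipy.optimize import minimize

T, dt = 50, 0.02                     # 50 time steps of 20 ms
q0, qT = 0.0, 1.2                    # start and end joint angles (rad)
q_min, q_max = -0.5, 2.0             # joint limits (rad)
w_energy, w_jerk, w_comfort = 1.0, 0.1, 0.5   # illustrative weights

def cost(q):
    qdd = np.diff(q, 2) / dt**2      # acceleration, a proxy for torque
    qddd = np.diff(q, 3) / dt**3     # jerk
    mid = 0.5 * (q_min + q_max)
    energy = np.sum(qdd**2) * dt
    jerk = np.sum(qddd**2) * dt
    comfort = np.sum((q - mid)**2) * dt    # penalize nearing joint limits
    return w_energy * energy + w_jerk * jerk + w_comfort * comfort

def full(q_free):                    # endpoints are fixed by the task
    return np.concatenate(([q0], q_free, [qT]))

res = minimize(lambda qf: cost(full(qf)),
               np.linspace(q0, qT, T)[1:-1], method="L-BFGS-B")
print("final cost:", cost(full(res.x)))
```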

In neuroscience, researchers have proposed other theories, such as coordination patterns between joints driven by simplifications of the variables used to control the motion. The key idea is to assume that instead of controlling all the degrees of freedom, people control higher-level variables which correspond to combinations of joint angles. In walking, data reduction techniques such as Principal Component Analysis have shown that lower-limb joint angles are generally projected onto a unique plane, whose angle in the state space is associated with energy expenditure. Although such knowledge exists for specific motions, such as locomotion or grasping, this type of approach is still difficult to generalize. The key problem is that many variables are coupled, and it is very difficult to objectively study the behavior of a single variable across various motor tasks. Computer simulation is a promising method to evaluate this type of assumption, as it enables accurate control of all the variables and checking whether they lead to natural movements.
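
As an illustration of this kind of analysis, the sketch below applies PCA to hip, knee and ankle angles over a gait cycle and checks how much variance lies in a plane. The signals are synthesized for the example; real studies use measured elevation angles.

```python
# Minimal sketch: testing the "planar covariation" hypothesis with PCA.
import numpy as np
from sklearn.decomposition import PCA

t = np.linspace(0, 2 * np.pi, 200)             # one gait cycle
hip   = 20 * np.sin(t)
knee  = 35 * np.sin(t - 0.6) + 15
ankle = 10 * np.sin(t - 1.2)
angles = np.column_stack([hip, knee, ankle])   # (frames, 3 joints)

pca = PCA(n_components=3).fit(angles)
explained = pca.explained_variance_ratio_
# If the first two components explain nearly all variance, the joint
# angles covary on a plane in angle space.
print("variance on first two PCs: %.1f%%" % (100 * explained[:2].sum()))
print("plane normal (3rd PC):", pca.components_[2])
```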

Neuroscience also addresses the problem of coupling perception and action by providing control laws based on visual cues (or any other senses), such as determining how optical flow is used to control direction in navigation tasks while dealing with collision avoidance or interception. The coupling of control variables is enhanced in this case, as the state of the body is enriched by the large amount of external information that the subject can use. Virtual environments inhabited by autonomous characters whose behavior is driven by motion control assumptions are a promising approach to address this problem. For example, an interesting problem in this field is navigating in an environment inhabited by other people. Typically, avoiding static obstacles along with other people moving inside that environment is a combinatorial problem that strongly relies on the coupling between perception and action.

One of the main objectives of MimeTIC is to enhance knowledge on human motion control by developing innovative experiments based on computer simulation and immersive environments. To this end, designing experimental protocols is a key point, and some of the researchers in MimeTIC have developed this skill in biomechanics and perception-action coupling. Associating these researchers with experts in virtual human simulation, computational geometry and constraint solving enables us to contribute to fundamental knowledge in human motion control.

3.2 Experiments in Virtual Reality

Understanding interactions between humans is challenging because it involves many complex phenomena, including perception, decision-making, cognition and social behaviors. Moreover, all these phenomena are difficult to isolate in real situations, so it is highly complex to understand their individual influence on human interactions. It is therefore necessary to find an alternative solution that can standardize the experiments and allows the modification of only one parameter at a time. Video was used first, since the displayed experiment is perfectly repeatable and cut-offs (stopping the video at a specific time before its end) provide temporal information. Nevertheless, the absence of an adapted viewpoint and of stereoscopic vision deprives the viewer of very meaningful depth information. Moreover, during video recording sessions, a real human acts in front of a camera and not in front of an opponent; the interaction is therefore not a real interaction between humans.

Virtual Reality (VR) systems allow the full standardization of experimental situations and complete control of the virtual environment. They make it possible to modify only one parameter at a time and to observe its influence on the perception of the immersed subject. VR can then be used to understand what information is picked up to make a decision. Moreover, cut-offs can also be used to obtain temporal information about when information is picked up. When the subject can react as in a real situation, his movement (captured in real time) provides information about his reactions to the modified parameter. Not only is perception studied, but the complete perception-action loop, since perception and action are coupled and influence each other, as suggested by Gibson in 1979.

Finally, VR allows the validation of virtual human models. Some models are indeed based on the interaction between the virtual character and other humans, such as a walking model. In that case, there are two ways to validate them. First, they can be compared to real data (e.g. real trajectories of pedestrians), but such data are not always available and are difficult to obtain. The alternative solution is then to use VR: the realism of the model is validated by immersing a real subject in a virtual environment in which a virtual character is controlled by the model. The evaluation is then deduced from how the immersed subject reacts when interacting with the model and how realistic he finds the virtual character.

3.3 Computer Animation

Computer animation is the branch of computer science devoted to models for the representation and simulation of the dynamic evolution of virtual environments. A first focus is the animation of virtual characters (behavior and motion). Through a deeper understanding of interactions using VR, and through better perceptive, biomechanical and motion control models to simulate the evolution of dynamic systems, the MimeTIC team is able to build more realistic, efficient and believable animations. Perceptual studies also enable us to focus computation time on relevant information (i.e., information ensuring that the motion looks natural from a perceptual point of view) and to save time on unperceived details. The underlying challenges are (i) the computational efficiency of the system, which needs to run in real time in many situations, (ii) the capacity of the system to generalise/adapt to new situations for which data are not available or models are not defined, and (iii) the variability of the models, i.e., their ability to handle many body morphologies and to generate variations in motions that are specific to each virtual character.

In many cases, however, these challenges cannot be addressed in isolation. Typically, character behaviors also depend on the nature and topology of the surrounding environment. In essence, a character animation system should also rely on smarter representations of the environments, in order to better perceive the environment itself and take contextualised decisions. Hence the animation of virtual characters in our context often needs to be coupled with models to represent the environment, to reason, and to plan, both at a geometric level (can the character reach this location?) and at a semantic level (should it use the sidewalk, the stairs, or the road?). This represents the second focus. The underlying challenge is the ability to offer a compact, yet precise, representation on which efficient path planning, motion planning and high-level reasoning can be performed.

Finally, a third scientific focus is digital storytelling. Evolved representations of motions and environments enable realistic animations. It is equally important, however, to question how these events should be portrayed, when, and from which angle. In essence, this means integrating discourse models into story models, the story representing the sequence of events which occur in a virtual environment, and the discourse representing how this story should be displayed (i.e. which events to show, in which order and with which viewpoint). The underlying challenges pertain to:

  • narrative discourse representations,
  • projections of the discourse into the geometry, planning camera trajectories and planning cuts between the viewpoints,
  • means to interactively control the unfolding of the discourse.

By thus establishing the foundations to build bridges between high-level narrative structures, the semantic/geometric planning of motions and events, and low-level character animation, the MimeTIC team adopts a principled and all-inclusive approach to the animation of virtual characters.

4 Application domains

4.1 Animation, Autonomous Characters and Digital Storytelling

Computer Animation is one of the main application domains of the research work conducted in the MimeTIC team, in particular in relation to the entertainment and game industries. In these domains, creating virtual characters able to replicate real human motions and behaviours still raises key unanswered challenges, especially as virtual characters are required to populate virtual worlds. For instance, virtual characters are used to replace secondary actors and to generate highly populated scenes that would be hard and costly to produce with real actors. This requires creating high-quality replicas that appear, move and behave both individually and collectively like real humans. The three key challenges for the MimeTIC team are therefore:

  • to create natural animations (i.e., virtual characters that move like real humans),
  • to create autonomous characters (i.e., that behave like real humans),
  • to orchestrate the virtual characters so as to create interactive stories.

First, our challenge is to create animations of virtual characters that are natural, i.e. that move like a real human would. This challenge covers several aspects of Character Animation depending on the context of application, e.g., producing visually plausible or physically correct motions, producing natural motion sequences, etc. Our goal is therefore to develop novel methods for animating virtual characters, based on motion capture, data-driven approaches, or learning approaches. However, because of the complexity of human motion (the number of degrees of freedom that can be controlled), the resulting animations are not necessarily physically, biomechanically, or visually plausible. For instance, current physics-based approaches produce physically correct motions but not necessarily perceptually plausible ones. For all these reasons, most entertainment industries (e.g., gaming and movie production) still mainly rely on manual animation. Research in MimeTIC on character animation is therefore also conducted with the goal of validating the results from an objective standpoint (physical, biomechanical) as well as a subjective one (visual plausibility).

Second, one of the main challenges in terms of autonomous characters is to provide a unified architecture for the modeling of their behavior. This architecture includes perception, action and decisional parts. The decisional part needs to mix different kinds of models, acting at different time scales and working with data of different natures, ranging from numerical (motion control, reactive behaviors) to symbolic (goal-oriented behaviors, reasoning about actions and changes). For instance, autonomous characters play the role of actors driven by a scenario in video games and virtual storytelling. Their autonomy allows them to react to unpredictable user interactions and to adapt their behavior accordingly. In the field of simulation, autonomous characters are used to simulate the behavior of humans in different kinds of situations; they enable the study of new situations and their possible outcomes. In the MimeTIC team, our focus is therefore not to reproduce human intelligence, but to propose an architecture making it possible to model credible behaviors of anthropomorphic virtual actors evolving/moving in real time in virtual worlds. The latter can represent particular situations studied by behavioral psychologists, or correspond to an imaginary universe described by a scenario writer. The proposed architecture should mimic all the human intellectual and physical functions.

Finally, interactive digital storytelling, including novel forms of edutainment and serious games, provides access to social and human themes through stories which can take various forms, and offers opportunities for massively enhancing the possibilities of interactive entertainment, computer games and digital applications. It provides chances for redefining the experience of narrative through interactive simulations of computer-generated story worlds, and opens many challenging questions at the overlap between computational narratives, autonomous behaviours, interactive control, content generation and authoring tools. Of particular interest for the MimeTIC research team, virtual storytelling triggers challenging opportunities in providing effective models for enforcing autonomous behaviours for characters in complex 3D environments. Offering characters low-level capacities (perceiving the environment, interacting with it, and reacting to changes in its topology), on which to build higher levels (abstract representations for efficient reasoning, path and activity planning, cognitive states and behaviours), requires expressive, multi-level and efficient computational models. Furthermore, virtual storytelling requires the seamless control of the balance between the autonomy of characters and the unfolding of the story through the narrative discourse. Virtual storytelling also raises challenging questions on the conveyance of a narrative through interactive or automated control of the cinematography (how to stage the characters, the lights and the cameras). For example, estimating the visibility of key subjects, or performing motion planning for cameras and lights, are central issues which have not received satisfactory answers in the literature.

4.2 Fidelity of Virtual Reality

VR is a powerful tool for perception-action experiments. VR-based experimental platforms allow exposing a population to fully controlled stimuli that can be repeated from trial to trial with high accuracy. Factors can be isolated, and object manipulations (position, size, orientation, appearance, etc.) are easy to perform. Stimuli can be interactive and adapted to participants' responses. These features allow researchers to use VR to perform experiments in sports, motion control, perceptual control laws and spatial cognition, as well as on person-person interactions. However, the interaction loop between users and their environment differs in virtual conditions compared with real conditions. When a user interacts in an environment, movement from action and perception are closely related. While moving, the perceptual system (vision, proprioception, etc.) provides feedback about the user's own motion and information about the surrounding environment, allowing the user to adapt his/her trajectory to sudden changes in the environment and to generate a safe and efficient motion. In virtual conditions, the interaction loop is more complex because it involves several material aspects.

First, the virtual environment is perceived through a numerical display, which can affect the available information and thus potentially introduce a bias. For example, studies have observed a distance compression effect in VR, partially explained by the use of a head-mounted display with a reduced field of view that exerts weight and torques on the user's head. Similarly, the velocity perceived in a VR environment differs from real-world velocity, introducing an additional bias. Other factors, such as image contrast, delays in the displayed motion and the point of view, can also influence efficiency in VR. The second point concerns the user's motion in the virtual world. The user can actually move if the virtual room is big enough or when wearing a head-mounted display. Yet even with real motion, studies have shown that walking speed decreases, personal space size is modified, and navigation in VR is performed with increased gait instability. Although natural locomotion is certainly the most ecological approach, the limited physical size of VR setups prevents its use most of the time. Locomotion interfaces are therefore required. They are made up of two components, a locomotion metaphor (device) and a transfer function (software), which can also introduce bias in the generated motion. Indeed, the actuating movement of the locomotion metaphor can significantly differ from real walking, and the simulated motion depends on the transfer function applied. Locomotion interfaces usually cannot preserve all the sensory channels involved in locomotion.

When studying human behavior in VR, the aforementioned factors in the interaction loop potentially introduce bias both in the perception and in the generation of motor behavior trajectories. MimeTIC is working on the mandatory step of VR validation to make it usable for capturing and analyzing human motion.

4.3 Motion Sensing of Human Activity

Recording human activity is a key point of many applications and fundamental works. Numerous sensors and systems have been proposed to measure positions, angles or accelerations of the user's body parts. Whatever the system, one of the main problems is to automatically recognize and analyze the user's performance from poor and noisy signals. Human activity and motion are subject to variability: intra-variability due to space and time variations of a given motion, but also inter-variability due to different styles and anthropometric dimensions. MimeTIC has addressed the above problems in two main directions.

First, we have studied how to recognize and quantify motions performed by a user when using accurate systems such as Vicon (product of Oxford Metrics), Qualisys, or Optitrack (product of Natural Point) motion capture systems. These systems provide large vectors of accurate information. Due to the size of the state vector (all the degrees of freedom), the challenge is to find the compact information (named features) that enables an automatic system to recognize the performance of the user. Whatever the method used, finding relevant features that are not sensitive to intra-individual and inter-individual variability is a challenge. Some researchers have proposed to manually edit these features (such as a Boolean value stating whether the arm is moving forward or backward), so that the expertise of the designer directly conditions the success ratio. Many generic features have also been proposed, such as the Laban notation, which was introduced to encode dancing motions. Other approaches use machine learning to automatically extract these features. However, most of the proposed approaches are used to search a database for motions whose properties correspond to the features of the user's performance (named motion retrieval approaches). This does not ensure retrieving the exact performance of the user, but a set of motions with similar properties.
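
A minimal sketch of this feature-based retrieval idea follows, using hypothetical joint indices and simple Boolean relational features; real systems rely on far richer feature sets.

```python
# Minimal sketch: hand-crafted Boolean features for motion retrieval.
# Poses are (frames, joints, 3) arrays of 3D positions (y is up).
import numpy as np

R_WRIST, L_WRIST, PELVIS, HEAD = 0, 1, 2, 3    # illustrative indices

def features(poses):
    """Per-frame Boolean features, invariant to absolute position."""
    rel = poses - poses[:, PELVIS:PELVIS + 1, :]   # pelvis-centred
    return np.stack([
        rel[:, R_WRIST, 1] > rel[:, HEAD, 1],      # right hand above head
        rel[:, L_WRIST, 1] > rel[:, HEAD, 1],      # left hand above head
        rel[:, R_WRIST, 2] > 0.2,                  # right hand forward
    ], axis=1)

def similarity(query, candidate):
    """Fraction of frames whose feature vectors match (equal lengths)."""
    return np.mean(np.all(features(query) == features(candidate), axis=1))

# Retrieval: rank database motions by feature similarity to the query.
rng = np.random.default_rng(0)
query = rng.normal(size=(100, 4, 3))
database = [rng.normal(size=(100, 4, 3)) for _ in range(5)]
ranked = sorted(range(5), key=lambda i: -similarity(query, database[i]))
print("best match:", ranked[0])
```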

Second, we wish to find alternatives to the above approach, which is based on analyzing accurate and complete knowledge of joint angles and positions. New sensors, such as depth cameras (e.g. the Microsoft Kinect), provide us with very noisy joint information, but also with the surface of the user. Classical approaches would try to fit a skeleton to this surface in order to compute joint angles, which, again, leads to large state vectors. An alternative is to extract relevant information directly from the raw data, such as the surface provided by depth cameras. The key problem is that the nature of these data may be very different from classical representations of human performance. In MimeTIC, we address this problem in application domains that require picking up specific information, such as gait asymmetry or regularity for the clinical analysis of human walking.

4.4 Sports

Sport is characterized by complex displacements and motions. One main objective is to understand the determinants of performance through the analysis of the motion itself. In the team, different sports have been studied, such as the tennis serve, where the goal was to understand the contribution of each segment of the body to the performance but also to the risk of injury, as well as other situations in cycling, swimming, fencing or soccer. Sport motions depend on the visual information that the athlete can pick up in his environment, including the opponent's actions. Perception is thus fundamental to performance: a sport action, being unique, complex and often limited in time, requires a selective gathering of information. This perception is often seen as a prerequisite for action, taking the role of a passive collector of information. However, as Gibson noted in 1979, the perception-action relationship should not be considered sequentially but rather as a coupling: we perceive to act, but we must act to perceive. There would thus be laws of coupling between the informational variables available in the environment and the motor responses of a subject. In other words, athletes have the ability to perceive opportunities of action directly from the environment. Whichever school of thought is considered, VR offers new perspectives to address these concepts, complemented by real-time motion capture of the immersed athlete.

In addition to helping better understand sports and interactions between athletes, VR can also be used as a training environment, as it can provide complementary tools to coaches. It is indeed possible to add visual or auditory information to better train an athlete. The knowledge gained from perceptual experiments can, for example, be used to highlight the body parts that are important to look at in order to correctly anticipate the opponent's action.

4.5 Ergonomics

The design of workstations nowadays tends to include assessment steps in a Virtual Environment (VE) to evaluate ergonomic features. This approach is more cost-effective and convenient since working directly on the Digital Mock-Up (DMU) in a VE is preferable to constructing a real physical mock-up in a Real Environment (RE). This is substantiated by the fact that a Virtual Reality (VR) set-up can be easily modified, enabling quick adjustments of the workstation design. Indeed, the aim of integrating ergonomics evaluation tools in VEs is to facilitate the design process, enhance the design efficiency, and reduce the costs.

The development of such platforms calls for several improvements in the fields of motion analysis and VR. First, interactions have to be as natural as possible to properly mimic the motions performed in real environments. Second, the fidelity of the simulator also needs to be correctly evaluated. Finally, motion analysis tools have to be able to provide, in real time, biomechanical quantities usable by ergonomists to analyse and improve working conditions.

In real working conditions, motion analysis and musculoskeletal risk assessment also raise many scientific and technological challenges. As in virtual reality, the fidelity of the working process may be affected by the measurement method: wearing sensors or skin markers, together with the need to frequently calibrate the assessment system, may change the way workers perform their tasks. Whatever the measurement, classical ergonomic assessments generally address one specific parameter (such as posture, force, or repetitions), which makes it difficult to design a musculoskeletal risk indicator that actually represents this risk. Another key scientific challenge is thus to design new indicators that better capture the risk of musculoskeletal disorders. Such an indicator has to deal with the trade-off between accurate biomechanical assessment and the difficulty of obtaining reliable information in real working conditions.

4.6 Locomotion and Interactions between walkers

Modeling and simulating locomotion and interactions between walkers is a very active, complex and competitive domain, investigated by various disciplines such as mathematics, cognitive sciences, physics, computer graphics and rehabilitation. Locomotion and interactions between walkers are by definition at the very core of our society, since they represent basic synergies of our daily life. When walking in the street, we must produce a locomotor movement while gathering information about our surrounding environment, in order to interact with people and move without collision, alone or in a group, to intercept, meet or escape from somebody. MimeTIC is an international key contributor in the domain of understanding and simulating locomotion and interactions between walkers. By combining an approach based on human movement sciences and computer science, the team focuses on the locomotor invariants which characterize the generation of locomotor trajectories, and conducts challenging experiments on the visuo-motor coordination involved in interactions between walkers, using both real and virtual set-ups. One main challenge is to consider and model not only the "average" behaviour of the healthy young adult, but to extend this to specific populations, considering the effect of pathology or age (children, older adults). As a first example, when patients cannot walk efficiently, in particular those suffering from central nervous system affections, it becomes very useful for practitioners to benefit from an objective evaluation of their capacities. To facilitate such evaluations, we have developed two complementary indices, one based on kinematics and the other on muscle activations. One major point of our research is that such indices are usually only developed for children, whereas adults with these affections are much more numerous. We extend this objective evaluation by using a person-person interaction paradigm, which allows studying visuo-motor strategy deficits in these specific populations.

Another fundamental question is the adaptation of the walking pattern to anatomical constraints, such as pathologies in orthopedics, or to the morphologies of various human and non-human primates in paleoanthropology. The question is thus to predict plausible locomotion for a given morphology. This raises fundamental questions about the variables that are regulated to control gait: balance control, minimum energy, minimum jerk, etc. In MimeTIC we develop models and simulators to efficiently test hypotheses on gait control for given morphologies.

5 Highlights of the year

  • The HORIZON-CL4-2022-HUMAN-01-14 project "ShareSpace - Embodied Social Experiences in Hybrid Shared Spaces", led by DFKI, was accepted; it will start in January 2023.
  • Anne-Hélène Olivier, Marc Christie and Ludovic Hoyet moved to the VIRTUS team, led by Julien Pettré, on July 1st, 2022.

5.1 Awards

  • Best paper award at the International Conference on Interactive Media, Smart Systems and Emerging Technologies (IMET 2022) for the paper entitled "Coupling dense point cloud correspondence and template model fitting for 3D human pose and shape reconstruction from a single depth image".

6 New software and platforms

6.1 New software

6.1.1 AsymGait

  • Name:
    Asymmetry index for clinical gait analysis based on depth images
  • Keywords:
    Motion analysis, Kinect, Clinical analysis
  • Scientific Description:
    The system uses depth images delivered by the Microsoft Kinect to first retrieve the gait cycles. To this end, it analyzes the knee trajectories instead of the feet, which yields more robust gait event detection. Based on these cycles, the system computes a mean gait cycle model to decrease the effect of sensor noise. Asymmetry is then computed at each frame of the gait cycle as the spatial difference between the left and right parts of the body (see the sketch after this entry).
  • Functional Description:
    AsymGait is a software package that works with Microsoft Kinect data, especially depth images, to carry out clinical gait analysis. It first identifies the main gait events (footstrike, toe-off) using the depth information, in order to isolate gait cycles. It then computes a continuous asymmetry index within the gait cycle, where asymmetry is viewed as a spatial difference between the two sides of the body.
  • Contact:
    Franck Multon
  • Participants:
    Edouard Auvinet, Franck Multon
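
Below is a minimal sketch of the asymmetry index idea described in this entry, on synthetic data: gait cycles are averaged to attenuate sensor noise, then the left and right body sides are compared frame by frame. All depth-image processing details are omitted.

```python
# Minimal sketch: per-frame left/right asymmetry over averaged cycles.
import numpy as np

def mean_cycle(cycles):
    """Average time-normalized gait cycles to attenuate sensor noise.
    cycles: (n_cycles, n_frames, n_points) surface samples per side."""
    return cycles.mean(axis=0)

def asymmetry_index(left_cycles, right_cycles):
    left, right = mean_cycle(left_cycles), mean_cycle(right_cycles)
    # Per-frame spatial difference between the two body sides.
    return np.linalg.norm(left - right, axis=1)

rng = np.random.default_rng(1)
left = rng.normal(0.0, 0.01, size=(8, 100, 50))
right = rng.normal(0.02, 0.01, size=(8, 100, 50))  # slightly offset side
index = asymmetry_index(left, right)                # 100 = % of the cycle
print("peak asymmetry at %d%% of the cycle" % index.argmax())
```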

6.1.2 Cinematic Viewpoint Generator

  • Keyword:
    3D animation
  • Functional Description:
    The software, developed as an API, provides a means to automatically compute a collection of viewpoints over one or two specified geometric entities, in a given 3D scene, at a given time. These viewpoints satisfy classical cinematographic framing conventions and guidelines, including different shot scales (from extreme long shot to extreme close-up), different shot angles (internal, external, parallel, apex), and different screen compositions (thirds, fifths, symmetric or dissymmetric). The viewpoints cover the range of possible framings for the specified entities. Their computation relies on a database of framings that are dynamically adapted to the 3D scene by using a manifold parametric representation, and guarantees the visibility of the specified entities. The set of viewpoints is also automatically annotated with cinematographic tags such as shot scales, angles, compositions, relative placement of entities, and line of interest. A minimal sketch of the underlying idea follows this entry.
  • Contact:
    Marc Christie
  • Participants:
    Christophe Lino, Emmanuel Badier, Marc Christie
  • Partners:
    Université d'Udine, Université de Nantes
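
As a rough illustration of viewpoint computation, the sketch below places a camera around a subject given a shot scale and a horizontal angle. The per-scale distances are invented for the example; the actual tool adapts a database of framings and guarantees visibility.

```python
# Minimal sketch: camera placement from shot scale and azimuth (y is up).
import numpy as np

SHOT_DISTANCES = {          # metres from subject, illustrative values
    "close-up": 1.0, "medium": 3.0, "long": 8.0,
}

def viewpoint(subject_pos, shot_scale, azimuth_deg, height=1.6):
    d = SHOT_DISTANCES[shot_scale]
    a = np.radians(azimuth_deg)
    cam = subject_pos + np.array([d * np.cos(a), 0.0, d * np.sin(a)])
    cam[1] = height                      # eye-level camera
    look_dir = subject_pos - cam
    return cam, look_dir / np.linalg.norm(look_dir)

cam, look = viewpoint(np.array([0.0, 1.0, 0.0]), "medium", azimuth_deg=30)
print("camera at", cam.round(2), "looking", look.round(2))
```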

6.1.3 CusToM

  • Name:
    Customizable Toolbox for Musculoskeletal simulation
  • Keywords:
    Biomechanics, Dynamic Analysis, Kinematics, Simulation, Mechanical multi-body systems
  • Scientific Description:

    The present toolbox aims at performing a motion analysis thanks to an inverse dynamics method.

    Before performing the motion analysis steps, a musculoskeletal model is generated. This consists of first generating the desired anthropometric model from model libraries. The generated model is then kinematically calibrated using motion capture data. The inverse kinematics step, the inverse dynamics step and the muscle force estimation step are then successively performed from motion capture and external force data (a minimal sketch of this pipeline follows this entry). Two folders and one script are available at the toolbox root. The Main script collects all the different functions of the motion analysis pipeline. The Functions folder contains all functions used in the toolbox; this folder and all its subfolders must be added to the Matlab path. The Problems folder contains the different studies, with one subfolder per study. Once a new musculoskeletal model is used, a new study is necessary. Different files are automatically generated and saved in this folder. All files located at its root are related to the model and are valid whatever the motion considered. A new folder is added for each new motion capture, and all files located in such a folder relate only to that motion.

  • Functional Description:
    Inverse kinematics, inverse dynamics, muscle force estimation, and external force prediction.
  • Contact:
    Charles Pontonnier
  • Participants:
    Antoine Muller, Charles Pontonnier, Georges Dumont, Pierre Puchaud, Anthony Sorel, Claire Livet, Louise Demestre
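
The sketch below illustrates the order of the analysis pipeline described above (inverse kinematics, then inverse dynamics, then muscle force estimation) on a toy one-joint case with synthetic data. CusToM itself is a Matlab toolbox handling full multi-body models; this Python fragment only mirrors the principle.

```python
# Toy 1-DoF pipeline: kinematics -> net joint torque -> muscle forces.
import numpy as np

dt = 0.01
t = np.arange(0.0, 1.0, dt)
q = 0.5 * np.sin(2 * np.pi * t)     # joint angle from "inverse kinematics"
I = 0.06                            # segment inertia (kg.m^2), illustrative

# Inverse dynamics: net joint torque explaining the measured kinematics.
qdd = np.gradient(np.gradient(q, dt), dt)
tau = I * qdd

# Muscle force estimation: trivially assign the torque to the agonist of
# each direction. Real toolboxes solve a constrained optimization over
# all muscles (e.g. minimizing the sum of squared activations).
r_flex, r_ext = 0.04, 0.035         # moment arms (m), illustrative
f_flex = np.where(tau > 0, tau / r_flex, 0.0)
f_ext = np.where(tau < 0, -tau / r_ext, 0.0)
print("peak flexor force: %.1f N" % f_flex.max())
```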

6.1.4 Directors Lens Motion Builder

  • Keywords:
    Previsualization, Virtual camera, 3D animation
  • Functional Description:
    Directors Lens Motion Builder is a software plugin for Autodesk's MotionBuilder animation tool. This plugin features a novel workflow to rapidly prototype cinematographic sequences in a 3D scene, and is dedicated to the 3D animation and movie previsualization industries. The workflow integrates the automated computation of viewpoints (using the Cinematic Viewpoint Generator) to interactively explore different framings of the scene, provides means to interactively control framings in image space, and offers a technique to automatically retarget a camera trajectory from one scene to another while enforcing visual properties. The tool also makes it possible to edit the cinematographic sequence and export the animation. The software can be linked to different virtual camera systems available on the market.
  • Contact:
    Marc Christie
  • Participants:
    Christophe Lino, Emmanuel Badier, Marc Christie
  • Partner:
    Université de Rennes 1

6.1.5 Kimea

  • Name:
    Kinect IMprovement for Ergonomics Assessment
  • Keywords:
    Biomechanics, Motion analysis, Kinect
  • Scientific Description:
    Kimea consists in correcting skeleton data delivered by a Microsoft Kinect for ergonomics purposes. Kimea is able to manage most of the occlusions that can occur in real working situations at workstations. To this end, Kimea relies on a database of example poses organized as a graph, in order to replace unreliable body segment reconstructions with poses that have already been measured on real subjects. The potential pose candidates are used in an optimization framework (a minimal sketch of the replacement idea follows this entry).
  • Functional Description:
    Kimea takes Kinect skeleton data as input and corrects most measurement errors to carry out ergonomic assessment at the workstation.
  • Contact:
    Franck Multon
  • Participants:
    Franck Multon, Hubert Shum, Pierre Plantard
  • Partner:
    Faurecia
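
The following sketch illustrates the example-based correction principle on synthetic data: an unreliable joint is replaced using the closest database pose, matched on the reliable joints. Kimea's actual graph structure and optimization framework are omitted.

```python
# Minimal sketch: nearest-example pose correction for occluded joints.
import numpy as np

def correct_pose(pose, reliable, database):
    """pose: (joints, 3); reliable: boolean mask; database: (n, joints, 3)."""
    d = np.linalg.norm((database - pose)[:, reliable, :], axis=(1, 2))
    best = database[d.argmin()]
    fixed = pose.copy()
    fixed[~reliable] = best[~reliable]     # keep reliable measurements
    return fixed

rng = np.random.default_rng(2)
db = rng.normal(size=(500, 20, 3))         # example poses (illustrative)
noisy = db[42].copy()
noisy[5] += 1.0                            # occluded joint gone wrong
mask = np.ones(20, dtype=bool)
mask[5] = False
print(np.allclose(correct_pose(noisy, mask, db)[5], db[42][5]))
```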

6.1.6 Populate

  • Keywords:
    Behavior modeling, Agent, Scheduling
  • Scientific Description:

    The software provides the following functionalities:

    - A high-level XML dialect dedicated to the description of agent activities in terms of tasks and sub-activities that can be combined with different kinds of operators: sequential, unordered, interlaced. This dialect also enables the description of time and location constraints associated with tasks.

    - An XML dialect that enables the description of agent's personal characteristics.

    - An informed graph that describes the topology of the environment as well as the locations where tasks can be performed. A bridge between TopoPlan and Populate has also been designed. It provides an automatic analysis of an informed 3D environment that is used to generate an informed graph compatible with Populate.

    - The generation of a valid task schedule based on the previously mentioned descriptions.

    With a good configuration of agent characteristics (based on statistics), we demonstrated that task schedules produced by Populate are representative of human ones. In conjunction with TopoPlan, it has been used to populate a district of Paris as well as imaginary cities with several thousand pedestrians navigating in real time.

  • Functional Description:
    Populate is a toolkit dedicated to task scheduling under time and space constraints in the field of behavioral animation. It is currently used to populate virtual cities with pedestrians performing different kinds of activities implying travel between different locations. However, the generic aspect of the algorithm and its underlying representations enable its use in a wide range of applications that need to link activity, time and space. The main scheduling algorithm relies on the following inputs: an informed environment description, an activity the agent needs to perform, and the individual characteristics of this agent. The algorithm produces a valid task schedule compatible with the time and spatial constraints imposed by the activity description and the environment. In this task schedule, time intervals relating to travel and task fulfillment are identified, and the locations where tasks should be performed are automatically selected (a minimal scheduling sketch follows this entry).
  • Contact:
    Fabrice Lamarche
  • Participants:
    Carl-Johan Jorgensen, Fabrice Lamarche
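
Below is a minimal, illustrative sketch of task scheduling under time and space constraints: tasks are ordered greedily by time window, and travel times are inserted between locations. Populate's real algorithm and representations are considerably richer; all names and data here are invented.

```python
# Minimal sketch: greedy task scheduling with time windows and travel.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    location: str
    duration: float          # hours
    earliest: float          # opening time of the time window
    latest: float            # latest allowed end time

def travel(a, b):
    return 0.0 if a == b else 0.5          # constant travel time (toy)

def schedule(tasks, start_loc, start_time):
    plan, loc, t = [], start_loc, start_time
    for task in sorted(tasks, key=lambda k: k.earliest):
        t = max(t + travel(loc, task.location), task.earliest)
        if t + task.duration > task.latest:
            return None                     # constraint violated
        plan.append((task.name, t))
        loc, t = task.location, t + task.duration
    return plan

tasks = [Task("buy bread", "bakery", 0.2, 8.0, 12.0),
         Task("work", "office", 4.0, 9.0, 18.0)]
print(schedule(tasks, "home", 7.5))
```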

6.1.7 PyNimation

  • Keywords:
    Moving bodies, 3D animation, Synthetic human
  • Scientific Description:
    PyNimation is a Python-based open-source (AGPL) software for editing motion capture data, which was initiated because of a lack of open-source software able to process different types of motion capture data in a unified way, a lack that typically forces animation pipelines to rely on several commercial packages: motions are captured with one software package, retargeted with another, edited with a third, etc. The goal of PyNimation is therefore to bridge the gap in the animation pipeline between motion capture software and final game engines, by handling different types of motion capture data in a unified way, providing standard and novel motion editing solutions, and exporting motion capture data to be compatible with common 3D game engines (e.g., Unity, Unreal). Its goal is also to support our research efforts in this area; it is therefore used, maintained, and extended to progressively include novel motion editing features and to integrate the results of our research projects. In the short term, our goal is to further extend its capabilities and to share it more widely with the animation/research community.
  • Functional Description:

    PyNimation is a framework for editing, visualizing and studying skeletal 3D animations; it was more particularly designed to process motion capture data. It stems from the wish to leverage Python's data science capabilities and ease of use for human motion research.

    In its version 1.0, PyNimation offers the following functionalities, which will evolve with the development of the tool:
    - import/export of the FBX, BVH and MVNX animation file formats,
    - access to and modification of skeletal joint transformations, along with a number of utilities to manipulate these transformations,
    - basic features for human motion animation (under development, including e.g. different inverse kinematics methods, editing filters, etc.),
    - interactive visualisation in OpenGL for animations and objects, including the possibility to animate skinned meshes.
    A hypothetical usage sketch of such a pipeline follows this entry.

  • Authors:
    Ludovic Hoyet, Robin Adili, Benjamin Niay, Alberto Jovane
  • Contact:
    Ludovic Hoyet
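
The sketch below shows what a unified import/edit/export pipeline of this kind can look like. All function names are hypothetical stand-ins, not PyNimation's actual API.

```python
# Hypothetical sketch of an import -> edit -> export motion pipeline.
# These helpers are illustrative only, not PyNimation's actual API.
import numpy as np

def load_bvh(path):
    """Stand-in loader: returns (frames, joints, 3) Euler angles."""
    rng = np.random.default_rng(3)
    return rng.normal(size=(120, 31, 3))

def smooth(rotations, window=5):
    """Moving-average editing filter over time, per joint channel."""
    kernel = np.ones(window) / window
    flat = rotations.reshape(rotations.shape[0], -1)
    out = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, flat)
    return out.reshape(rotations.shape)

anim = load_bvh("walk.bvh")         # import (FBX/BVH/MVNX in the real tool)
anim = smooth(anim)                 # edit
np.save("walk_smoothed.npy", anim)  # export toward a game-engine pipeline
```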

6.1.8 The Theater

  • Keywords:
    3D animation, Interactive Scenarios
  • Functional Description:
    The Theater is a software framework to develop interactive scenarios in virtual 3D environments. The framework provides means to author and orchestrate 3D character behaviors and simulate them in real time. The tool provides a basis to build a range of 3D applications, from simple simulations with reactive behaviors to complex storytelling applications including narrative mechanisms such as flashbacks.
  • Contact:
    Marc Christie
  • Participant:
    Marc Christie

6.2 New platforms

6.2.1 Immerstar Platform

Participants: Georges Dumont [contact], Ronan Gaugne, Anthony Sorel, Richard Kulpa.

With its two virtual reality platforms, Immersia and Immermove, grouped under the name Immerstar, the team has access to high-level scientific facilities. This equipment benefits the research teams of the center and has allowed them to extend their local, national and international collaborations. The Immerstar platform was supported by an Inria CPER grant for 2015-2019 that enabled important evolutions of the equipment. The first technical evolutions were decided in 2016 and implemented in 2017. For Immermove, a third face was added to the immersive space and the Vicon tracking system was extended, continued this year with 23 new cameras. For Immersia, WQXGA laser projectors with higher global resolution, a new higher-frequency tracking system, and new computers for simulation and image generation were installed in 2017. In 2018, a Scale One haptic device was installed; as planned in the CPER proposal, it allows one- or two-handed haptic feedback in the full space covered by Immersia, with the possibility of carrying the user.

Based on this equipment, we participated in 2020 in a PIA3-Equipex+ proposal. This proposal, CONTINUUM, involving 22 partners, was successfully evaluated and granted. The CONTINUUM project will create a collaborative research infrastructure of 30 platforms located throughout France, to advance interdisciplinary research based on interaction between computer science and the human and social sciences. Thanks to CONTINUUM, 37 research teams will develop cutting-edge research programs focusing on visualization, immersion, interaction and collaboration, as well as on human perception, cognition and behaviour in virtual/augmented reality, with potential impact on societal issues. CONTINUUM enables a paradigm shift in the way we perceive, interact, and collaborate with complex digital data and digital worlds, by putting humans at the center of the data processing workflows. The project will empower scientists, engineers and industry users with a highly interconnected network of high-performance visualization and immersive platforms to observe, manipulate, understand and share digital data, real-time multi-scale simulations, and virtual or augmented experiences. All platforms will feature facilities for remote collaboration with other platforms, as well as mobile equipment that can be lent to users to facilitate onboarding. The kick-off meeting of CONTINUUM was held on January 14th, 2022, and a global meeting on July 5th and 6th, 2022.

7 New results

7.1 Outline

In 2022, MimeTIC has maintained its activity in motion analysis, modelling and simulation, to support the idea that these approaches are strongly coupled in a motion analysis-synthesis loop. This idea has been applied to the main application domains of MimeTIC:

  • Animation, Autonomous Characters and Digital Storytelling,
  • Fidelity of Virtual Reality,
  • Motion sensing of Human Activity,
  • Sports,
  • Ergonomics,
  • Locomotion and Interactions Between Walkers.

7.2 Animation, Autonomous Characters and Digital Storytelling

MimeTIC's main research path consists in associating motion analysis and synthesis to enhance naturalness in computer animation, with applications in camera control, movie previsualization, and autonomous virtual character control. We have thus pushed example-based techniques in order to reach a good trade-off between simulation efficiency and naturalness of the results. In 2022, to achieve this goal, MimeTIC continued to explore the use of perceptual studies and model-based approaches, and also began to investigate deep learning to generate plausible behaviors.

7.2.1 Study on Automatic 3D Facial Caricaturization: From Rules to Deep Learning

Participants: Ludovic Hoyet, Franck Multon [contact], Nicolas Olivier.

Figure 4
Figure 4: Results of our novel user-controlled rule-based approach. Each pair (A, B, C, and D) presents the input facial scan (wired on the left) and its automatically generated caricature on the right.

Facial caricature is the art of drawing faces in an exaggerated way to convey emotions such as humor or sarcasm. Automatic caricaturization has been explored both in the 2D and 3D domains. In this work [28], we propose two novel approaches to automatically caricaturize input facial scans, filling gaps in the literature in terms of user control and caricature style transfer, and exploring the use of deep learning for 3D mesh caricaturization. The first approach is a gradient-based differential deformation approach with data-driven stylization (Figure 4). It is a combination of two deformation processes: facial curvature and proportion exaggeration. The second approach is a GAN for unpaired face-scan-to-3D-caricature translation. We leverage existing facial and caricature datasets, along with recent domain-to-domain translation methods and 3D convolutional operators, to learn to caricaturize 3D facial scans in an unsupervised way. To evaluate and compare these two novel approaches with the state of the art, we conducted the first user study of facial mesh caricaturization techniques, with 49 participants. It highlights the subjectivity of caricature perception and the complementarity of the methods. Finally, we provide insights for automatically generating caricaturized 3D facial meshes.
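
For intuition, the sketch below shows the classic exaggeration principle such methods build on: amplifying a subject's deviation from a mean face. The data is synthetic, and the actual method works on differential (gradient-domain) coordinates with data-driven stylization.

```python
# Amplify subject-specific offsets from a mean face (k > 1 exaggerates).
import numpy as np

def caricaturize(vertices, mean_vertices, k=1.8):
    """Classic caricature: push the shape away from the mean."""
    return mean_vertices + k * (vertices - mean_vertices)

rng = np.random.default_rng(4)
mean_face = rng.normal(size=(5000, 3))                 # mean face vertices
face = mean_face + 0.01 * rng.normal(size=(5000, 3))   # an individual face
cari = caricaturize(face, mean_face)
ratio = np.linalg.norm(cari - mean_face) / np.linalg.norm(face - mean_face)
print("deviation amplified x%.1f" % ratio)             # ~1.8
```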

7.2.2 AIP: Adversarial Interaction Priors for Multi-Agent Physics-based Character Control

Participants: Mohamed Younes, Franck Multon [contact], Richard Kulpa.

Figure 5
Figure 5: Simulated shadowboxing interactions between two physics-based characters.

We address the problem of controlling and simulating interactions between multiple physics-based characters, using short unlabeled motion clips 58. We propose Adversarial Interaction Priors (AIP), a multi-agent generative adversarial imitation learning (MAGAIL) approach, which extends recent deep reinforcement learning (RL) works aiming at imitating the example motions of a single character. The main contribution of this work is to extend the idea of motion imitation from a single character to interaction imitation between multiple characters. Our method uses a control policy for each character to imitate interactive behaviors provided by short example motion clips, and associates a discriminator with each character, trained on actor-specific interactive motion clips. The discriminator returns interaction rewards that measure the similarity between generated behaviors and the ones demonstrated in the reference motion clips. The policies and discriminators are trained in a multi-agent adversarial reinforcement learning procedure to improve the quality of the behaviors generated by each agent. Initial results show the effectiveness of our method on the interactive task of shadowboxing between two fighters.
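To make the interaction reward concrete, here is a minimal sketch, in Python/PyTorch, of a per-character GAIL-style discriminator whose output is converted into an imitation reward; the plain MLP and its sizes are illustrative assumptions, not the architecture used in 58.

import torch
import torch.nn as nn

class InteractionDiscriminator(nn.Module):
    """Scores state-action pairs: high when they resemble the reference clips."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

def interaction_reward(disc, obs, act):
    """GAIL-style reward -log(1 - sigmoid(D)): larger when the generated
    behaviour is indistinguishable from the demonstrated interactions."""
    with torch.no_grad():
        return -torch.log(1.0 - torch.sigmoid(disc(obs, act)) + 1e-8)

In the multi-agent setting described above, one such discriminator is trained per character on its actor-specific clips, and its reward is added to the RL objective of the corresponding policy.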

7.2.3 Interaction Fields: Intuitive Sketch-based Steering Behaviors for Crowd Simulation

Participants: Marc Christie, Adèle Colas, Ludovic Hoyet [contact], Anne-Hélène Olivier, Katja Zibrek.

Figure 6
Figure 6: We present interaction fields (IFs) for sketch-based design of local agent interactions in crowd simulation. Left: A user sketches guide curves (shown in blue), which are converted to an IF grid. The purpose of this specific IF is to let agents move behind one object to hide from another. Middle: 2D top view of the simulation. We let all gray obstacles and orange agents emit this IF. The blue agent perceives these IFs, causing it to hide from the red agent. Right: 3D impression of this scenario, combined with body animation per agent.

The real-time simulation of human crowds has many applications. In a typical crowd simulation, each person (`agent') in the crowd moves towards a goal while adhering to local constraints. Many algorithms exist for specific local `steering' tasks such as collision avoidance or group behavior. However, these do not easily extend to completely new types of behavior, such as circling around another agent or hiding behind an obstacle. They also tend to focus purely on an agent's velocity without explicitly controlling its orientation. This work 14 presents a novel sketch-based method for modelling and simulating many steering behaviors for agents in a crowd (Figure 6). Central to this is the concept of an interaction field (IF): a vector field that describes the velocities or orientations that agents should use around a given `source' agent or obstacle. An IF can also change dynamically according to parameters, such as the walking speed of the source agent. IFs can be easily combined with other aspects of crowd simulation, such as collision avoidance. Using an implementation of IFs in a real-time crowd simulation framework, we demonstrate the capabilities of IFs in various scenarios. This includes game-like scenarios where the crowd responds to a user-controlled avatar. We also present an interactive tool that computes an IF based on input sketches. This IF editor lets users intuitively and quickly design new types of behavior, without the need for programming extra behavioral rules. We thoroughly evaluate the efficacy of the IF editor through a user study, which demonstrates that our method enables non-expert users to easily enrich any agent-based crowd simulation with new agent interactions. This work was performed in collaboration with Julien Pettré from the Virtus team.
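At its core, an IF can be stored as a regular grid of preferred velocities expressed in the local frame of the emitting agent or obstacle, and queried by nearby agents at every simulation step. The following Python sketch illustrates this sampling under simplifying assumptions (nearest-cell lookup instead of smoother interpolation); it is not the published implementation.

import numpy as np

class InteractionField:
    """A vector field around a source, stored on a regular grid in the
    source's local frame; cells hold preferred velocities (or orientations)."""
    def __init__(self, grid: np.ndarray, cell_size: float):
        self.grid = grid                    # shape (H, W, 2)
        self.cell_size = cell_size
        self.half_extent = np.array(grid.shape[:2]) * cell_size / 2.0

    def sample(self, agent_pos, source_pos, source_angle):
        """Velocity suggested to an agent at agent_pos by a source emitting
        this IF at source_pos with heading source_angle (radians)."""
        c, s = np.cos(source_angle), np.sin(source_angle)
        to_local = np.array([[c, s], [-s, c]])          # world -> local rotation
        local = to_local @ (np.asarray(agent_pos) - np.asarray(source_pos))
        ij = np.floor((local + self.half_extent) / self.cell_size).astype(int)
        if not (0 <= ij[0] < self.grid.shape[0] and 0 <= ij[1] < self.grid.shape[1]):
            return np.zeros(2)                          # outside the field: no influence
        return to_local.T @ self.grid[ij[0], ij[1]]     # back to the world frame

The sampled velocities from all perceived IFs can then be blended with other steering terms (goal attraction, collision avoidance) to produce the agent's final velocity.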

7.2.4 Dynamic Combination of Crowd Steering Policies Based on Context

Participants: Ludovic Hoyet [contact].

Figure 7
Figure 7: Simulation of two agent flows crossing at 90º using three different motion strategies, and their respective score SQF according to a quality function. (a) Agents are steered using the Power Law model (PL, blue). Note that some agents are dragged along a diagonal towards the up-right direction, hence deviating them from their goal. PL (SQF=67).
Figure 8
Figure 8: Simulation of two agent flows crossing at 90º using three different motion strategies, and their respective score SQF according to a quality function. (c) Using our approach, characters dynamically switch motion policy depending on their local context, hence overcoming the motion artifacts displayed in (a) and (b). In this example, characters use the PL model, but switch to RVO in the 90º crossing context. The agents' color encodes their current policy. Our dynamic adaptation results in an increase of the overall quality score, SQF. PL and RVO (SQF=92).
Figure 9
Figure 9: Simulation of two agent flows crossing at 90º using three different motion strategies, and their respective score SQF according to a quality function. (c) Using our approach, characters dynamically switch motion policy depending on their local context, hence overcoming the motion artifacts displayed in (a) and (b). In this example, characters use the PL model, but switch to RVO in the 90º crossing context. The agents' color encodes their current policy. Our dynamic adaptation results in an increase of the overall quality score, SQF.

Simulating crowds requires controlling a very large number of trajectories of characters and is usually performed using crowd steering algorithms. The question of choosing the right algorithm with the right parameter values is of crucial importance given the large impact on the quality of results. In this work 12, we study the performance of a number of steering policies (i.e., simulation algorithm and its parameters) in a variety of contexts, resorting to an existing quality function able to automatically evaluate simulation results. This analysis allows us to map contexts to the performance of steering policies. Based on this mapping, we demonstrate that distributing the best performing policies among characters improves the resulting simulations. Furthermore, we also propose a solution to dynamically adjust the policies, for each agent independently and while the simulation is running, based on the local context each agent is currently in. We demonstrate significant improvements of simulation results compared to previous work that would optimize parameters once for the whole simulation, or pick an optimized, but unique and static, policy for a given global simulation context (Figures 7, 8 and 9). This work was performed in collaboration with Julien Pettré from the Virtus team.
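The dynamic adjustment can be summarized as a mapping from a classified local context to the best-scoring steering policy. The toy Python sketch below illustrates this idea; the context classes, the single feature used (mean relative heading of the neighbours) and the policy table are hypothetical placeholders for the richer context descriptors and quality-function scores used in 12.

import numpy as np

# Hypothetical offline mapping from context class to best policy, as could be
# obtained by scoring each steering policy with the quality function per context.
BEST_POLICY = {"sparse": "PL", "head_on": "PL", "crossing_90": "RVO"}

def classify_context(my_vel, neighbour_vels):
    """Toy context classifier based on the mean angle between the agent's
    velocity and its neighbours' velocities."""
    if len(neighbour_vels) == 0:
        return "sparse"
    mean_v = np.mean(neighbour_vels, axis=0)
    denom = np.linalg.norm(my_vel) * np.linalg.norm(mean_v) + 1e-9
    angle = np.degrees(np.arccos(np.clip(np.dot(my_vel, mean_v) / denom, -1.0, 1.0)))
    if angle > 135.0:
        return "head_on"
    if 45.0 <= angle <= 135.0:
        return "crossing_90"
    return "sparse"

def pick_policy(my_vel, neighbour_vels):
    """Re-evaluated at each simulation step, independently for each agent."""
    return BEST_POLICY[classify_context(my_vel, neighbour_vels)]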

7.2.5 Impact of Self-Contacts on Perceived Pose Equivalences

Participants: Franck Multon [contact], Jean Basset, Adnane Boukhayma, Ludovic Hoyet.

In 2018, we introduced the idea of a context graph to capture the relationship between body part surfaces and enhance the quality of motion retargeting. It thus becomes possible to retarget the motion of a source character to a target one while preserving the topological relationship between body part surfaces. However, this approach requires strictly satisfying distance constraints between body parts, whereas some of them could be relaxed to preserve naturalness. In 2019, we introduced a new paradigm based on transferring the shape instead of encoding pose constraints, to tackle this problem for isolated poses.

In 2020, we extended this approach to handle continuity in motion, and non-human characters. This idea resulted from the collaboration with the Morpheo Inria team in Grenoble, in the context of the IPL AVATAR project. The proposed approaches were based on a nonlinear optimization framework, which involved manually editing the constraints to satisfy, and required a huge amount of computation time.

In 2021, to achieve the deformation transfer, we proposed a neural encoder-decoder architecture where only identity information is encoded and where the decoder is conditioned on the pose. We use pose-independent representations, such as isometry-invariant shape characteristics, to represent identity features. The model is depicted in Figure 10. Our model uses these features to supervise the prediction of offsets from the deformed pose to the result of the transfer. We show experimentally that our method outperforms state-of-the-art methods both quantitatively and qualitatively, and generalizes better to poses not seen during training. We also introduce a fine-tuning step that yields competitive results for extreme identities and allows simple clothing to be transferred.
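In spirit, this architecture reduces to an identity encoder and a pose-conditioned decoder predicting per-vertex offsets. The sketch below uses plain MLPs as stand-ins for the actual multi-resolution encoder-decoder of Figure 10, so it illustrates the data flow only, not the published network.

import torch
import torch.nn as nn

class IdentityTransfer(nn.Module):
    """Encode pose-independent identity features of the target, condition the
    decoder on the source pose, and predict per-vertex offsets from the source
    mesh towards the transfer result."""
    def __init__(self, n_verts: int, id_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_verts * 3, 512), nn.ReLU(), nn.Linear(512, id_dim))
        self.decoder = nn.Sequential(
            nn.Linear(n_verts * 3 + id_dim, 512), nn.ReLU(),
            nn.Linear(512, n_verts * 3))

    def forward(self, target_id_features, source_pose_verts):
        code = self.encoder(target_id_features.flatten(1))       # identity code
        x = torch.cat([source_pose_verts.flatten(1), code], dim=-1)
        offsets = self.decoder(x).view_as(source_pose_verts)     # per-vertex offsets
        return source_pose_verts + offsets                       # transferred mesh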

Defining equivalences between poses of different human characters is an important problem for imitation research, human pose recognition and deformation transfer. However, pose equivalence is subjective information that depends on context and on the morphology of the characters. A common hypothesis is that interactions between body surfaces, such as self-contacts, are important attributes of human poses, and are therefore consistently included in animation approaches aiming at retargeting human motions. However, some of these self-contacts are only present because of the morphology of the character and are not important to the pose, e.g. contacts between the upper arms and the torso during a standing A-pose. In 37, we conduct a first study towards the goal of understanding the impact of self-contacts between body surfaces on perceived pose equivalences. More specifically, we focus on contacts between the arms or hands and the upper body, which are frequent in everyday human poses. We conduct a study where we present to observers two models of a character mimicking the pose of a source character, one with the same self-contacts as the source, and one with one self-contact removed, and ask observers to select which model best mimics the source pose. We show that while poses with different self-contacts are considered different by observers in most cases, this effect is stronger for self-contacts involving the hands than for those involving the arms.

Figure 10
Figure 10: Overview of the proposed approach. The encoder (green) generates an identity code for the target. We feed this code to the decoder (red) along with the source, which is concatenated with the decoder features at all resolution stages. The decoder finally outputs per-vertex offsets from the input source towards the identity transfer result.

7.3 Fidelity of Virtual Reality

MimeTIC wishes to promote the use of Virtual Reality to analyze and train human motor performance. It raises the fundamental question of the transfer of knowledge and skills acquired in VR to real life. In 2022, we maintained our efforts to enhance the experience of users when interacting physically with a virtual world.

7.3.1 The Stare-in-the-Crowd Effect in Virtual Reality

Participants: Marc Christie, Ludovic Hoyet, Alberto Jovane, Anne-Hélène Olivier [contact], Pierre Raimbaud, Katja Zibrek.

Figure 11
Figure 11: The stare-in-the-crowd effect describes the tendency of humans to notice and observe, more frequently and for longer, gazes oriented toward them (directed gaze) rather than gazes directed elsewhere (averted gaze). This work analyses the presence of such an effect in VR and its relationship with social anxiety levels. The figure above shows an example of the user's view during our experiment. All agents, except the woman in the front row wearing a black jacket, have their gaze averted.

Nonverbal cues are paramount in real-world interactions. Among these cues, gaze has received much attention in the literature. In particular, previous work using photographic stimuli has shown a search asymmetry between gaze directed towards the observer and averted gaze, with faster detection of, and longer fixations on, directed gaze. This is known as the stare-in-the-crowd effect. In this work 38, we investigate whether the stare-in-the-crowd effect is preserved in Virtual Reality (VR). To this end, we designed a within-subject experiment where 30 human users were immersed in a virtual environment in front of an audience of 11 virtual agents (Figure 11), following 4 different gaze behaviours. We analysed the users' gaze behaviour when observing the audience, computing fixations and dwell time. We also collected the users' social anxiety score using a post-experiment questionnaire to control for some potential influencing factors. Results show that the stare-in-the-crowd effect is preserved in VR, as demonstrated by the significant differences between gaze behaviours, similarly to what was found in previous studies using photographic stimuli. Additionally, we found a negative correlation between dwell time towards directed gazes and users' social anxiety scores. Such results are encouraging for the development of expressive and reactive virtual humans, which can be animated to express natural interactive behaviour. This work was performed in collaboration with Julien Pettré from the Virtus team and Claudio Pacchierotti from the Rainbow team.
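For reference, dwell time can be aggregated from a sampled sequence of gaze hits as in the short sketch below; it assumes gaze samples have already been labelled with the agent they land on, and is a generic computation rather than the exact analysis pipeline of 38.

def dwell_times(gaze_targets, dt):
    """Total dwell time per target from a sampled gaze-hit sequence:
    gaze_targets holds one label (e.g. an agent id, or None) per sample of
    duration dt seconds."""
    totals = {}
    for target in gaze_targets:
        if target is not None:
            totals[target] = totals.get(target, 0.0) + dt
    return totals

# e.g. dwell_times(["agent3", "agent3", None, "agent7"], dt=1 / 90)
# for a 90 Hz eye tracker.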

7.3.2 The One-Man-Crowd: Single User Generation of Crowd Motions Using Virtual Reality

Participants: Marc Christie, Ludovic Hoyet [contact], Tairan Yin.

Figure 12
Figure 12: Snapshot of crowd motions generated using the one-man-crowd approach. A single user successively embodies each displayed virtual agent in the order indicated by the highlighting colour (from blue to yellow). We studied 3 scenarios replicating existing experiments, from left to right: circular unidirectional flow, bottleneck situation, and inflow behaviour (entering a lift).

Crowd motion data is fundamental for understanding and simulating realistic crowd behaviours. Such data is usually collected through controlled experiments to ensure that both desired individual interactions and collective behaviours can be observed. It is however scarce, due to ethical concerns and logistical difficulties involved in its gathering, and only covers a few typical crowd scenarios. In this work 35, we propose and evaluate a novel Virtual Reality based approach lifting the limitations of real-world experiments for the acquisition of crowd motion data. Our approach immerses a single user in virtual scenarios where he/she successively acts as each crowd member. By recording the past trajectories and body movements of the user, and displaying them on virtual characters, the user progressively builds the overall crowd behaviour by him/herself. We validate the feasibility of our approach by replicating three real experiments (Figure 12), and compare both the resulting emergent phenomena and the individual interactions to existing real datasets. Our results suggest that realistic collective behaviours can naturally emerge from virtual crowd data generated using our approach, even though the variety in behaviours is lower than in real situations. These results provide valuable insights to the building of virtual crowd experiences, and reveal key directions for further improvements. This work was performed in collaboration with Julien Pettré from the Virtus team and Marie-Paule Cani from LIX.

7.3.3 Do You Need Another Hand? Investigating Dual Body Representations During Anisomorphic 3D Manipulation

Participants: Ludovic Hoyet [contact].

Figure 13
Figure 13: Illustration of two types of dual body representation studied in this work 20. On the left image, a ghost representation enables remote manipulation while a realistic co-located representation provides feedback with respect to the real user's position. On the right image, a ghost representation provides feedback with respect to the user's real position while a realistic representation enables precise manipulation with the environment thanks to slowed motion.

In virtual reality, several manipulation techniques distort users’ motions, for example to reach remote objects or increase precision. These techniques can become problematic when used with avatars, as they create a mismatch between the real performed action and the corresponding displayed action, which can negatively impact the sense of embodiment. In this work 20, we propose to use a dual representation during anisomorphic interaction. A co-located representation serves as a spatial reference and reproduces the exact users’ motion, while an interactive representation is used for distorted interaction (Figure 13). We conducted two experiments, investigating the use of dual representations with amplified motion (with the Go-Go technique) and decreased motion (with the PRISM technique). Two visual appearances for the interactive representation and the co-located one were explored. This exploratory study investigating dual representations in this context showed that people globally preferred having a single representation, but opinions diverged for the Go-Go technique. Also, we could not find significant differences in terms of performance. While interacting seemed more important than showing exact movements for agency during out-of-reach manipulation, people felt more in control of the realistic arm during close manipulation. This work was performed in collaboration with Ferran Argelaguet and Anatole Lécuyer from the Hybrid team.

7.4 Motion Sensing of Human Activity

MimeTIC has long experience in motion analysis in laboratory conditions. In the MimeTIC project, we proposed to explore how these approaches could be transferred to ecological situations, where experimental conditions are far less controlled. In 2022, we continued to explore the use of deep learning techniques to capture human performance from simple RGB or depth images. We also continued exploring how to customize complex musculoskeletal models with simple calibration processes, and investigated the use of machine learning to access parameters that cannot be measured directly.

7.4.1 Coupling dense point cloud correspondence and template model fitting for 3D human pose and shape reconstruction from a single depth image

Participants: Adnane Boukhayma, Franck Multon [contact].

Figure 14
Figure 14: (a) Input depth image; (b) DoubleUNet: two stacked U-Nets inferring segmentation and color map regression; (c) embedded color: the first three channels encode the human body part, the last three channels encode the pixel normalized distance; (d) SMPL fitting; (e) output: 3D human shape.

In this work 40, we address the problem of capturing both the shape and the pose of a character using a single depth sensor. Some previous works proposed to fit a parametric generic human template to the depth image, while others developed deep learning (DL) approaches to find the correspondence between depth pixels and vertices of the template. In this paper, we explore the possibility of combining these two approaches to benefit from their respective advantages. The hypothesis is that DL dense correspondence should provide more accurate information to template model fitting, compared to previous approaches which only use estimated joint positions. Thus, we stacked a state-of-the-art DL dense correspondence method (namely double U-Net) and parametric model fitting (namely Simplify-X). Experiments on the SURREAL and DFAUST datasets and on a subset of AMASS show that this hybrid approach enhances pose and shape estimation compared to using DL or model fitting separately. This result opens new perspectives for pose and shape estimation in the many applications where complex or invasive motion capture set-ups are impossible, such as sports, dance, or ergonomic assessment.

This work was part of the European project SCHEDAR 36, funded by ANR and led by Cyprus University. It was performed in collaboration with the University of Reims Champagne-Ardenne.

7.4.2 Few 'Zero Level Set'-Shot Learning of Shape Signed Distance Functions in Feature Space

Participants: Adnane Boukhayma [contact], Amine Ouasfi.

We explore a new idea for learning-based shape reconstruction from a point cloud, based on the recently popularized implicit neural shape representations. We cast the problem as a few-shot learning of implicit neural signed distance functions in feature space, which we approach using gradient-based meta-learning. We use a convolutional encoder to build a feature space given the input point cloud. An implicit decoder learns to predict signed distance values given points represented in this feature space. Setting the input point cloud, i.e. samples from the target shape function's zero level set, as the support (i.e. context) in few-shot learning terms, we train the decoder such that it can adapt its weights to the underlying shape of this context with a few (5) tuning steps. We thus combine two types of implicit neural network conditioning mechanisms simultaneously for the first time, namely feature encoding and meta-learning. Our numerical and qualitative evaluation shows that, in the context of implicit reconstruction from a sparse point cloud, our proposed strategy, i.e. meta-learning in feature space, outperforms existing alternatives, namely standard supervised learning in feature space and meta-learning in Euclidean space, while still providing fast inference.
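The inner loop of this procedure follows the standard MAML recipe, applied to the decoder parameters with the support point cloud acting as zero-level-set supervision. A minimal PyTorch sketch, assuming a functional decoder decoder_fn(params, features) that returns SDF values at the support points:

import torch

def maml_adapt(theta, decoder_fn, support_feats, inner_steps=5, inner_lr=1e-2):
    """Adapt the meta-decoder parameters theta to one shape in a few steps."""
    phi = list(theta)                                  # start from meta-parameters
    for _ in range(inner_steps):
        sdf = decoder_fn(phi, support_feats)           # SDF at the support points
        loss = sdf.abs().mean()                        # support lies on the zero level set
        grads = torch.autograd.grad(loss, phi, create_graph=True)
        phi = [p - inner_lr * g for p, g in zip(phi, grads)]
    return phi       # specialized parameters, still differentiable w.r.t. theta

The meta-update then back-propagates a query-set loss, evaluated with the returned specialized parameters, to the meta-parameters theta; at test time the same few adaptation steps are run from the converged meta-model.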

Figure 15
Figure 15: Overview of our method. Our input is a sparse point cloud (support S_i) and our output is an implicit neural SDF f. f is a neural network comprised of a convolutional encoder (top in gray) and an MLP decoder (bottom in gray). The decoder predicts SDF values for 3D points (red/blue circles) through their spatially sampled features (squares in shades of red/blue) from the encoder's activation maps. Following a gradient-based few-shot learning algorithm (MAML), we learn a meta-decoder in encoder feature space, parameterized with θ, that can quickly adapt to a new shape, i.e. new parameters ϕ_i, given its support. This is achieved by iterating per-shape 5-step adaptation gradient descent (orange arrow) using the support loss L_{S_i}, and one-step meta gradient descent (green arrow) by back-propagating the query set (Q_i) loss L_{Q_i}, evaluated with the specialized parameters ϕ_i, w.r.t. the meta-parameters θ. At test time, 5 fine-tuning iterations are performed similarly starting from the converged meta-model to evaluate f.

7.4.3 Neural Mesh-Based Graphics

Participants: Adnane Boukhayma [contact], Franck Multon, Shubhendu Jena.

We revisit NPBG (Neural Point-Based Graphics), the popular approach to novel view synthesis that introduced the ubiquitous point-feature neural rendering paradigm. We are particularly interested in data-efficient learning with fast view synthesis. We achieve this through a view-dependent, mesh-based, denser point descriptor rasterization, in addition to a foreground/background scene rendering split and an improved loss. By training solely on a single scene, we outperform NPBG, which has been trained on ScanNet and then fine-tuned on the scene. We also perform competitively with respect to the state-of-the-art method SVS, which has been trained on the full dataset (DTU and Tanks and Temples) and then fine-tuned on the scene, in spite of its deeper neural renderer.

Figure 16
Figure 16: Unlike NPBG (Neural Point-Based Graphics), we introduce mesh-based view-dependent feature rasterization. We learn per-point spherical harmonic coefficients, interpolated via the barycentric coordinates of the ray-face intersection.
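To illustrate the view-dependent rasterization, the sketch below evaluates a first-order spherical harmonics basis at the viewing direction and combines it with barycentrically interpolated per-vertex coefficients; the restriction to order 1 and the tensor shapes are simplifying assumptions.

import numpy as np

def sh_basis_l1(d):
    """First-order real spherical harmonics basis (4 values) for a unit direction d."""
    x, y, z = d
    return np.array([0.2820948, 0.4886025 * y, 0.4886025 * z, 0.4886025 * x])

def view_dependent_feature(face_coeffs, bary, view_dir):
    """face_coeffs: (3, 4, F) SH coefficients of the 3 face vertices for an
    F-dimensional feature; bary: barycentric coordinates of the ray-face
    intersection; view_dir: viewing direction."""
    coeffs = np.tensordot(bary, face_coeffs, axes=1)      # (4, F) interpolated
    basis = sh_basis_l1(view_dir / np.linalg.norm(view_dir))
    return basis @ coeffs                                 # (F,) feature for this pixel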

7.4.4 Musculoskeletal analysis library developments

Participants: Claire Livet, Georges Dumont [contact], Charles Pontonnier [contact].

For several years, our team has been developing a musculoskeletal analysis toolbox (CusToM), which now serves as a basis for many PhD works. We continue to enhance the capabilities of this toolbox by integrating PhD contributions. This year, we worked specifically on the development of closed-loop models and optimisation methods. We propose an alternative method for solving constrained multibody kinematics optimisation, using a penalty method on the constraints and a Levenberg-Marquardt algorithm, and compare it to an optimisation resolution with hard kinematic constraints. These methods were applied to two pairs of experiments and models. The penalty method was at least 20 times faster than the hard-constrained resolution while keeping similar reconstruction errors and constraint violations. The method is shown to accurately solve the multibody kinematics optimisation problem in a reasonable amount of time. A further computational gain lies in implementing this resolution in compiled and optimised program code 24. This work was defended by Claire Livet in her PhD thesis 48.
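A minimal sketch of the penalty formulation, using SciPy's Levenberg-Marquardt solver (CusToM itself is a MATLAB toolbox, so this Python version only illustrates the idea): the closed-loop constraints are appended to the marker residuals as weighted soft terms instead of being enforced exactly.

import numpy as np
from scipy.optimize import least_squares

def ik_penalty(q0, marker_residuals, loop_constraints, w_penalty=1e3):
    """Constrained multibody kinematics optimisation via a penalty method.
    marker_residuals(q): model-to-marker distances; loop_constraints(q):
    closure equations of the closed loops (zero when satisfied)."""
    def residuals(q):
        return np.concatenate([
            marker_residuals(q),
            np.sqrt(w_penalty) * loop_constraints(q),   # softened constraints
        ])
    return least_squares(residuals, q0, method="lm")    # Levenberg-Marquardt

The penalty weight trades constraint violation against conditioning: it must be large enough to keep loop closure errors small, yet small enough to keep the problem well behaved.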

7.4.5 Pressure insole assessment for external force prediction

Participants: Pauline Morin, Georges Dumont [contact], Charles Pontonnier [contact].

In motion analysis studies, classical inverse dynamics methods require knowledge of the ground reaction forces and moments (GRF&M) to compute internal forces. Force platforms are considered the gold standard to measure the GRF&M applied to the feet, but they reduce the ecological aspect of the experimental conditions by limiting the analysis area. Estimating external forces from motion data and dynamic equations circumvents this limitation at the expense of accuracy. In such an estimation method, the inverse dynamics problem is underdetermined, since contact is modelled by multiple points representing the potential ground-foot contact area. We have investigated the potential of pressure insoles to detect contact in an external force estimation method. Estimating the foot center of pressure (CoP) position with pressure insoles appears to be an interesting technical solution to perform motion analysis beyond the force platform surface area. The aim of the published work 27 was to estimate the CoP position from Moticon® pressure insoles during sidestep cuts, runs and walks. The CoP positions assessed from force platform data and from pressure insole data were compared. One calibration trial performed on the force platforms was used to localize the insoles in the reference coordinate system. The accuracy of the pressure insoles therefore depends on the calibration trial used for this localization, and the most accurate results were obtained when the motion performed during the calibration trial was similar to the motion under study. In that case, the mean accuracy of the CoP position was evaluated at 15 ± 4 mm along the anteroposterior (AP) axis and 8.5 ± 3 mm along the mediolateral (ML) axis for sidestep cuts, 18 ± 5 mm along the AP axis and 7.3 ± 4 mm along the ML axis for runs, and 15 ± 6 mm along the AP axis and 6.6 ± 3 mm along the ML axis for walks. The accuracy of the CoP position assessment from pressure insole data increased with the vertical force applied to the insole and with the number of pressure cells involved.
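The CoP itself follows from a standard pressure-weighted average of the insole cell positions, as in the sketch below; mapping the result into the laboratory frame then relies on the calibration trial discussed above, and the thresholding is an illustrative detail.

import numpy as np

def centre_of_pressure(cell_positions, cell_pressures, threshold=0.0):
    """CoP as the pressure-weighted mean of the insole cell positions.
    cell_positions: (N, 2) cell centres in the insole frame; cell_pressures:
    (N,) pressure (or normal force) per cell."""
    p = np.where(cell_pressures > threshold, cell_pressures, 0.0)
    total = p.sum()
    if total <= 0.0:
        return None                       # no contact detected
    return (p[:, None] * cell_positions).sum(axis=0) / total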

7.5 Sports

MimeTIC promotes the idea of coupling motion analysis and synthesis in various domains, especially sports. More specifically, we have long experience and international leadership in using Virtual Reality to analyze and train sports performance. In 2022, we continued to explore how to enhance the use of VR to design original training systems. More specifically, we addressed the problem of early motion recognition to make a virtual opponent react to the user's action before it ends. We also used VR as a means to analyze perception in sports, or to train anticipation skills by introducing visual artifacts in the VR experience.

We also initiated some simulation work to better characterize the interaction between a user and their physical environment, such as interactions between swimmers and diving boards.

7.5.1 Early Recognition of Untrimmed Handwritten Gestures with Spatio-Temporal 3D CNN

Participants: Richard Kulpa [contact], William Mocaer.

Early recognition of untrimmed handwritten gestures is the task of recognizing, as soon as possible, gestures drawn in a continuous stream, one after another. This challenge is similar to early human whole-body motion recognition. It is particularly challenging for multi-touch gestures because it is impossible to know when the gesture has started and finished. For mono-stroke gestures, in an application context where the finger is never removed from the device between gestures, recognition is even more complex. In this work, we present an extension of the Online Long-Term Convolutional 3D (OLT-C3D) network to address the task of early recognition of untrimmed gestures, which has been addressed by very few works. To evaluate our approach, we created two synthetic datasets using freely available benchmarks, MTGSetB and ILGDB, simulating streaming data in two different application scenarios 41. Furthermore, we propose a new evaluation metric for this specific task. Our approach achieves good performance on the two new datasets and will serve as a baseline for future works on this challenging task.

7.5.2 Multiple Players Tracking in Virtual Reality: Influence of Soccer Specific Trajectories and Relationship With Gaze Activity

Participants: Richard Kulpa [contact], Anthony Sorel, Annabelle Limballe, Benoit Bideau, Alexandre Vu.

The perceptual-cognitive ability to track multiple moving objects and its contribution to team sports performance has traditionally been studied in the laboratory under non-sport-specific conditions. It is thus questionable whether the measured visual tracking performance and the underlying gaze activity reflect the actual ability of team sports players to track teammates and opponents on a real field. Using a Virtual Reality-based visual tracking task, we observed the ability of participants to track multiple moving virtual players as they would do on a soccer field, with two objectives: (i) assess the influence of different scenario types (soccer-specific trajectories versus pseudo-random trajectories) on the visual tracking performance of soccer players (n = 15) compared to non-soccer players (n = 16); (ii) observe the influence of spatial features of the simulated situations on the gaze activity of soccer players and non-soccer players. (i) The linear mixed model regression revealed a significant main effect of the group but no interaction effect between group and type of trajectories, suggesting that the visual tracking ability of soccer players did not benefit from their specific knowledge when they faced scenarios with real game trajectories. (ii) The virtual players' spatial dispersion and crowding affected the participants' gaze activity and their visual tracking performance. Furthermore, the gaze activity of soccer players differed in some aspects from that of non-soccer players. We formulate assumptions as to the implication of these results in the difference in visual tracking performance between soccer players and non-soccer players. Overall, using soccer-specific trajectories might not be enough to replicate the representativeness of field conditions in the study of visual tracking performance 33. Multitasking constraints should be considered along with motor-cognitive dual-tasks in future research to improve the representativeness of visual exploration conditions.

7.5.3 Visual tracking assessment in a soccer-specific virtual environment: A web-based study

Participants: Richard Kulpa [contact], Anthony Sorel, Annabelle Limballe, Benoit Bideau, Alexandre Vu.

The ability to track teammates and opponents is an essential quality to achieve a high level of performance in soccer. Visual tracking ability is usually assessed in the laboratory with non-sport-specific scenarios, leading to two major concerns. First, the methods used probably only partially reflect the actual ability to track players on the field. Second, it is unclear whether the situational features manipulated to stimulate visual tracking ability match those that make it difficult to track real players. In this study, participants had to track multiple players on a virtual soccer field 32. The virtual players moved according to either real or pseudo-random trajectories. The experiment was conducted online using a web application. Regarding the first concern, the visual tracking performance of players in soccer, other team sports, and non-team sports was compared to see if differences between groups varied with the use of soccer-specific or pseudo-random movements. Contrary to our assumption, the ANOVA did not reveal a greater tracking performance difference between soccer players and the two other groups when facing stimuli featuring movements from actual soccer games compared to stimuli featuring pseudo-random ones. Directing virtual players with real-world trajectories did not appear to be sufficient to allow soccer players to use soccer-specific knowledge in their visual tracking activity. Regarding the second concern, an original exploratory analysis based on Hierarchical Clustering on Principal Components was conducted to compare the situational features associated with hard-to-track virtual players in soccer-specific or pseudo-random movements. It revealed differences in the situational feature sets associated with hard-to-track players depending on movement type. Essentially, with soccer-specific movements, how the virtual players were distributed in space appeared to have a significant influence on visual tracking performance. These results highlight the need to consider real-world scenarios to understand what makes tracking multiple players difficult.

7.5.4 Using Blur for Perceptual Investigation and Training in Sport? A Clear Picture of the Evidence and Implications for Future Research

Participants: Richard Kulpa [contact], Annabelle Limballe.

Dynamic, interactive sports require athletes to identify, pick up and process relevant information in a very limited time, in order to then make an appropriate response. Perceptual-cognitive skills are, therefore, a key determinant of elite sporting performance. Recently, sport scientists have investigated ways to assess and train perceptual-cognitive skills, one such method involving the use of blurred stimuli. Here, we describe the two main methods used to generate blur (i.e., dioptric and Gaussian) and then review the current findings in a sports context 23. Overall, it has been shown that the use of blur can enhance performance and learning of sporting tasks in novice participants, especially when the blur is applied to peripheral stimuli. However, while intermediate- and expert-level participants are relatively impervious to the presence of blur, it remains to be determined whether there are positive effects on learning. In a final section, we describe some of the methodological issues that limit the application of blur and then discuss the potential use of virtual reality to extend the current research base in sporting contexts.

7.5.5 Acceptance by athletes of a virtual reality head-mounted display intended to enhance sport performance

Participants: Richard Kulpa [contact].

While a growing number of studies have highlighted the potential of virtual reality (VR) to improve athletes' skills, no research has yet focused on acceptance of a VR head-mounted display (VR-HMD) designed to increase sport performance. However, even if technological devices could potentially lead to performance improvement, athletes may not always accept them. To investigate this issue, the Technology Acceptance Model (TAM) examines whether perceived usefulness, perceived ease of use, perceived enjoyment, and subjective norms (i.e., social influence) are positive predictors of intention to use a specific technology. The aims of the present study 26 were to test with competitive athletes the validity of the TAM before a first use of a VR-HMD intended to enhance sport performance, and to examine to what extent the level of practice and the type of sport practiced influence the previous variables of the TAM. The study sample comprised 1162 French athletes (472 women, 690 men, mean age = 24.50 ± 8.51 years) who usually practiced a sport in competition (from recreational to international level). After reading a short text presenting the VR-HMD and its interests for sport performance, the participants filled out an online questionnaire assessing their acceptance of this technological device before a first use. The results of the structural equation modeling analysis revealed that perceived usefulness, perceived ease of use, perceived enjoyment, and subjective norms were positive predictors of intention to use this VR-HMD, validating the suitability of the TAM for investigating the acceptance by athletes of a VR-HMD designed to increase their sport performance. The results also showed that athletes of all sport levels (a) had a significant intention to use VR, (b) found it quite useful (except for recreational athletes), quite easy to use, and quite pleasant to use, even if their entourage would not encourage them to use it (except for international athletes), and (c) found the VR-HMD easy and pleasant to use whatever the sport practiced. Notably, some athletes (e.g., triathletes, swimmers, cyclists) did not find the VR-HMD significantly useful and did not have significant intention to use it to increase their performance. Identifying acceptance by athletes of such a device may increase the likelihood that it will be used by athletes of different levels and from different sports, so that they can benefit from all its advantages related to the improvement of their sport performance. Needs-based targeted interventions may also be conducted toward athletes who might be reluctant to integrate this type of device into their training.

7.5.6 Influence of shoe torsional stiffness on foot and ankle biomechanics during tennis forehand strokes

Participants: Anthony Sorel [contact], Richard Kulpa, Benoit Bideau, Pierre Touzard.

Tennis shoe characteristics need to minimise the risk of ankle injuries while improving players' foot performance. This study aims to evaluate the influence of shoe torsional stiffness on running velocity, stance duration, ground reaction forces and ankle biomechanics during two different tennis forehand runs and strokes 25. Ten right-handed advanced male tennis players performed two specific tennis forehand runs and strokes at maximal effort (a shuttle run with a defensive open stance forehand, SRDF, and a lateral jab run with an offensive open stance forehand, JROF) with four pairs of tennis shoes of different torsional stiffness. A force platform measured ground reaction forces (GRF). A motion capture system recorded the 3D trajectories of markers located on the players' anatomical landmarks. The minimum and maximum angle values and the range of motion were computed using inverse kinematics for each rotation axis of the right ankle. Normalised maximal ankle torques were also computed using inverse dynamics. Shoe torsional stiffness had no effect on running velocity, stance duration or maximal GRF values. Shoe torsional stiffness influenced forefoot inversion, which was significantly higher for the most flexible shoes. For SRDF, the maximal ankle inversion angle was significantly and largely increased for the stiffest shoe. The stiffest shoe may thus put the ankle at a higher risk of lateral sprains during SRDF, while this was not the case during JROF.

7.5.7 Automatic Swimming Activity Recognition and Lap Time Assessment Based on a Single IMU: A Deep Learning Approach

Participants: Nicolas Bideau [contact], Guillaume Nicolas, Benoit Bideau, Erwan Delhaye, Antoine Bouvet.

We have introduced a deep learning model devoted to the analysis of swimming using a single Inertial Measurement Unit (IMU) attached to the sacrum 16. Gyroscope and accelerometer data were collected from 35 swimmers with various expertise levels during a protocol including the four swimming techniques. The proposed methodology took high inter- and intra-swimmer variability into account and was set up for the purpose of predicting eight swimming classes (the four swimming techniques, rest, wallpush, underwater, and turns) at four swimming velocities ranging from low to maximal. The overall F1-score of classification reached 0.96 with a temporal precision of 0.02 s. Lap times were directly computed from the classifier thanks to a high temporal precision and validated against a video gold standard. The mean absolute percentage error (MAPE) for this model against the video was 1.15%, 1%, and 4.07%, respectively, for starting lap times, middle lap times, and ending lap times. This model is a first step toward a powerful training assistant able to analyze swimmers with various levels of expertise in the context of in situ training monitoring.
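Once per-window classes are predicted, lap boundaries can be derived from transitions into turn or wall-push segments, from which lap times follow directly. A simplified Python sketch, assuming one class label per fixed-duration window (the published model's post-processing is more elaborate):

import numpy as np

def lap_times(labels, dt, boundary_classes=("turn", "wallpush")):
    """Durations between successive lap boundaries, where a boundary is a
    transition into a turn/wall-push segment; labels holds one predicted
    class per window of dt seconds."""
    boundaries = [i for i in range(1, len(labels))
                  if labels[i] in boundary_classes
                  and labels[i - 1] not in boundary_classes]
    return np.diff(np.array(boundaries) * dt)

# e.g. lap_times(predicted_classes, dt=0.02), matching the 0.02 s temporal
# precision reported above.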

7.5.8 Diving board characterization for diving analysis

Participants: Georges Dumont [contact], Charles Pontonnier, Guillaume Nicolas, Nicolas Bideau, Louise Demestre, Pauline Morin.

In this study, we developed several approaches aimed at understanding the interaction between a diver and the diving board during Olympic diving. Such developments may be very useful to enhance the athlete's performance by pointing out specific actions (impulse, arm motion, leg/arm synchronization) to both the trainer and the athlete. We created a finite-element-based diving board model and used experimental data to characterize it. The external force applied to the diving board was computed thanks to an external force prediction method minimizing the dynamic residuals on the diver during the jump. Results were published in two journal papers and a conference paper 17, 18, 19, and Louise Demestre defended her thesis in 2022 46.

7.5.9 Gymnastics landings Analysis

Participants: Diane Haering [contact], Charles Pontonnier, Rebecca Crolan.

The thesis of Rebecca Crolan deals with the link between landing strategies, fatigue and back pain in gymnastics. With regard to this objective, we studied the dissipation power of a landing mat to understand how much of the landing energy is dissipated by the material rather than by the athlete's strategy 15. In a preliminary study with a simplified protocol, we also studied how fatigue induces changes in the motor strategy of the athletes during simple landings 21.

7.6 Ergonomics

Ergonomics has become an important application domain in MimeTIC: being able to capture, analyze, and model human performance at work. In this domain, a key challenge consists in using limited equipment to capture the physical activity of workers in real conditions. Hence, in 2022, we explored how simulation could help support ergonomics in the specific case of interaction between a user and a physical system, such as a wheelchair or an exoskeleton.

7.6.1 Exoskeleton biomechanical impact assessment

Participants: Charles Pontonnier [contact], Divyaksh Subhash Chander.

The interaction of an exoskeleton with the worker raises many issues such as joint misalignment, force transfer, control design… Properly detecting such issues is a keystone to assisting the user efficiently. The prototyping of such systems is also based on the characterization of the task to assist; we therefore developed a protocol for the biomechanical assessment of meat cutting tasks, in order to develop specifications consistent with the objectives of the EXOSCARNE project. It has been partially published in 13.

7.6.2 Wheelchair locomotion ergonomics

Participants: Charles Pontonnier [contact], Georges Dumont, Claire Livet.

With regard to the objectives of the capacities project, we aimed at developing a synthetic cost of the physical demand in manual wheelchair locomotion. To this end, we reviewed the biomechanical evaluation of different environmental barriers encountered in the urban environment (slope, curb...), published in 30. An abstract dealing with methodological issues (soft tissue artifact compensation) in the study of wheelchair locomotion has also been published 29.

7.7 Locomotion and Interactions between Walkers

MimeTIC is a leader in the study and modeling of walkers' visuomotor strategies. This implies understanding how humans generate their walking trajectories within an environment. In 2022, we proposed a new analysis of emergent patterns in crossing flows of pedestrians, and studied the adaptation of collision avoidance strategies in athletes with a previous concussion. Let us highlight that this topic has now moved to the VIRTUS team, created in June 2022.

7.7.1 Collision avoidance strategies between two athlete walkers: Understanding impaired avoidance behaviours in athletes with a previous concussion

Participants: Anne-Hélène Olivier [contact], Armel Crétual.

Figure 17
Figure 17: Left: Illustration of the real-world image of two athletes interacting within the experiment to understand the effect of concussion on visuomotor strategies. Right: Relative contributions to solve the collision depending on the group of athletes interacting. When two concussed athletes are involved in the interaction, the variability is increased 31.

Individuals who have sustained a concussion often display associated balance control deficits and visuomotor impairments despite being cleared by a physician to return to sport. Such visuomotor impairments can be highlighted in collision avoidance tasks that involve a mutual adaptation between two walkers. However, studies have yet to challenge athletes with a previous concussion during an everyday collision avoidance task following return to sport. In this project, performed in the frame of the Inria BEAR associate team, we investigated whether athletes with a previous concussion display associated behavioural changes during a 90-degree collision avoidance task with an approaching pedestrian (cf. Figure 17, left). To this end, thirteen athletes (ATH; 9 females, 23 ± 4 years) and 13 athletes with a previous concussion (CONC; 9 females, 22 ± 3 years, concussion < 6 months) walked at a comfortable speed along a 12.6 m pathway while avoiding another athlete on a 90-degree collision course. Each participant randomly interacted with individuals from the same group 20 times and with individuals from the opposite group 21 times. The minimum predicted distance (mpd) was used to examine collision avoidance behaviours between the ATH and CONC groups. The overall progression of mpd(t) did not differ between groups (p > .05). During the collision avoidance task, previously concussed athletes contributed less when passing second compared to their peers (p < .001). When two previously concussed athletes were on a collision course, there was a greater amount of variability, resulting in inappropriate adaptive behaviours (cf. Figure 17, right). Although successful at avoiding a collision with an approaching athlete, previously concussed athletes exhibit behavioural changes manifesting in riskier behaviours. The current findings suggest that previously concussed athletes retain behavioural changes even after being cleared to return to sport, which may increase their risk of a subsequent injury when playing.
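For completeness, mpd at a given instant is the classical closest-approach distance obtained by linearly extrapolating both walkers' current velocities, as in this short sketch:

import numpy as np

def mpd(p1, v1, p2, v2):
    """Minimum predicted distance between two walkers at one instant: both
    positions are extrapolated with their current velocities, and the smallest
    future distance is returned (the current distance if they move apart)."""
    dp = np.asarray(p2) - np.asarray(p1)
    dv = np.asarray(v2) - np.asarray(v1)
    t_min = max(0.0, -np.dot(dp, dv) / (np.dot(dv, dv) + 1e-9))
    return float(np.linalg.norm(dp + t_min * dv))

Evaluating mpd over time as the two walkers adapt yields the mpd(t) progression analysed above.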

8 Bilateral contracts and grants with industry

8.1 Bilateral contracts with industry

Cifre Moovency - Effort-based criterion for the in-situ analysis of physical activity at work: application to bimanual load carrying

Participants: Franck Multon [contact], Georges Dumont, Charles Pontonnier, Hasnaa Ouadoudi Belabzioui.

This Cifre contract started in January 2022 for three years and funds the PhD thesis of Hasnaa Ouadoudi Belabzioui. It consists in building robust AI-based methods able to emulate inverse dynamics results from noisy and incomplete data, in order to study the physical constraints of operators in industrial workplaces. Indeed, the ergonomics of such workplaces needs to be assessed at the effort level, and no reliable method enables such an assessment in-situ from motion monitoring. The thesis aims at developing neural networks able to reproduce the results of a model-based inverse dynamics method, and then at constructing a reliable and synthetic indicator of the forces undergone by the operator during the execution of a given task.

The Cifre contract funds the PhD salary and 10 k€ per year for the supervision and management of the PhD thesis.

Cifre InterDigital - Adaptive Avatar Customization for Immersive Experiences

Participants: Franck Multon [contact], Ludovic Hoyet, Nicolas Olivier.

This Cifre contract started in February 2019 for three years and funded the PhD thesis of Nicolas Olivier. The aim of the project was to design stylized avatars of users in immersive environments and digital arts such as videogames or cinema.

To this end, we designed a pipeline from motion and shape capture of the user to the simulation of the 3D real-time stylized avatar, taking hair, eyes, face, body shape and motion into account. The key idea is to stylize both appearance and motion to make the avatar better correspond to the style of the movie or immersive experience. We carried out perceptual studies to better understand the expectations of users when controlling stylized avatars, in order to maximize embodiment. The Cifre contract funds the PhD salary and 15 k€ per year for the supervision and management of the PhD thesis. This contract is also in collaboration with the Hybrid team. The PhD was defended in March 2022.

Cifre InterDigital - Learning-Based Human Character Animation Synthesis for Content Production

Participants: Ludovic Hoyet [contact], Lucas Mourot.

The overall objective of the PhD thesis of Lucas Mourot, which started in June 2020, is to adapt and improve the state of the art in video animation and human motion modelling, to develop a semi-automated framework for human animation synthesis that brings real benefits to artists in the movie and advertising industry. In particular, one objective is to leverage novel learning-based approaches to propose skeleton-based animation representations, as well as editing tools, to improve the resolution and accuracy of the produced animations, so that automatically synthesized animations might become usable in an interactive way by animation artists.

8.2 Bilateral grants with industry

Collaboration with company SolidAnim (Bordeaux, France)

Participants: Marc Christie [contact], Xi Wang.

This contract started in November 2019 for three years. Its purpose is to explore novel means of performing depth detection for augmented reality applied to the film and broadcast industries. The grant serves to fund the PhD of Xi Wang (160 k€).

Collaboration with the Unreal company (Unreal Megagrant)

Participants: Marc Christie [contact].

This contract started in September 2020 for two years. The objective is to explore means of designing novel VR manipulators for character animation tasks in Unreal Engine (70 k€).

9 Partnerships and cooperations

9.1 International initiatives

9.1.1 Inria associate team not involved in an IIL or an international program

BEAR

Participant: Anne-Hélène Olivier [contact].

  • Title:
    from BEhavioral Analysis to modeling and simulation of interactions between walkeRs
  • Duration:
    2019 -> 2022
  • Coordinator:
    Michael Cinelli (mcinelli@wlu.ca)
  • Partners:
    • Wilfrid Laurier University
  • Inria contact:
    Anne-Hélène Olivier
  • Summary:
    BEAR project (from BEhavioral Analysis to modeling and simulation of interactions between walkeRs) is a collaborative project between France (Inria Rennes) and Canada (Wilfrid Laurier University and Waterloo University), dedicated to the simulation of human behavior during interactions between pedestrians. In this context, the project aims at providing more realistic models and simulation by considering the strong coupling between pedestrians’ visual perception and their locomotor adaptations.

9.2 International research visitors

9.2.1 Visits of international scientists

Other international visits to the team
Bradford McFadyen
  • Status
    (Professor)
  • Institution of origin: Dept. Rehabilitation, Université Laval; Centre for Interdisciplinary Research in Rehabilitation and Social Integration (Cirris), Québec
  • Country: Canada
  • Dates: March 2nd to April 30th, 2022
  • Context of the visit: To develop a collaborative research project aiming at studying the influence of a brain injury on the control of locomotion in the context of a navigation task with other walkers. To do so, we propose a methodology combining Movement Sciences and Digital Sciences that will allow us to study the locomotor behavior and the gaze behavior of patients in interaction with a virtual crowd.
  • Mobility program/type of mobility: sabbatical
Michael Cinelli
  • Status: Professor
  • Institution of origin: Wilfrid Laurier University, Waterloo
  • Country: Canada
  • Dates: May 29th to June 12th, 2022
  • Context of the visit: This stay is part of the associated team BEAR. The goal of this project is to study the influence of ageing on locomotion control in a navigation task in interaction with other walkers, in particular the role of appearance vs. movement. During this stay an experimental campaign was conducted.
  • Mobility program/type of mobility: BEAR Inria Associate Team
Sheryl Bourgaize
  • Status: PhD Student
  • Institution of origin: Wilfrid Laurier University, Waterloo
  • Country: Canada
  • Dates: May 29th to June 12th, 2022
  • Context of the visit: This stay is part of the associated team BEAR. The goal of this project is to study the influence of advancing age on locomotion control in a navigation task in interaction with other walkers, in particular the role of appearance vs. movement. During this stay an experimental campaign was conducted.
  • Mobility program/type of mobility: BEAR Inria Associate Team
Brooke Thompson
  • Status: Master Student
  • Institution of origin: Wilfrid Laurier University, Waterloo
  • Country: Canada
  • Dates: May 1st to July 24th, 2022
  • Context of the visit: This stay is part of the associated team BEAR. The goal of this project is to study the influence of ageing on locomotion control in a navigation task in interaction with other walkers, in particular the role of initial crossing order in the interaction. During this stay an experimental campaign was conducted.
  • Mobility program/type of mobility: MITACS
Gabriel Massarroto
  • Status: Master Student
  • Institution of origin: Wilfrid Laurier University, Waterloo
  • Country: Canada
  • Dates: May 1st to July 24th, 2022
  • Context of the visit: This stay is part of the associated team BEAR. The goal of this project is to study the influence of ageing on locomotion control in a navigation task in interaction with other walkers, in particular the role of initial crossing order in the interaction. During this stay an experimental campaign was conducted.
  • Mobility program/type of mobility: MITACS
Megan Hammill
  • Status: Master Student
  • Institution of origin: Wilfrid Laurier University, Waterloo
  • Country: Canada
  • Dates: May 1st to July 24th, 2022
  • Context of the visit: This stay is part of the associated team BEAR. The goal of this project is to study the influence of ageing on locomotion control in a navigation task in interaction with other walkers, in particular the role of initial crossing order in the interaction. During this stay an experimental campaign was conducted.
  • Mobility program/type of mobility: MITACS

9.3 European initiatives

9.3.1 Horizon Europe

CLIPE

Participants: Vicenzo Abichequer Sangalli, Marc Christie, Ludovic Hoyet, Tairan Yin.

  • Title:
    Creating Lively Interactive Populated Environments
  • Duration:
    2020 - 2024
  • Coordinator:
    University of Cyprus
  • Partners:
    • University of Cyprus (CY)
    • Universitat Politecnica de Catalunya (ES)
    • INRIA (FR)
    • University College London (UK)
    • Trinity College Dublin (IE)
    • Max Planck Institute for Intelligent Systems (DE)
    • KTH Royal Institute of Technology, Stockholm (SE)
    • Ecole Polytechnique (FR)
    • Silversky3d (CY)
  • Inria contact:
    Julien Pettré, team Virtus
  • Summary:
    CLIPE is an Innovative Training Network (ITN) funded by the Marie Skłodowska-Curie program of the European Commission. The primary objective of CLIPE is to train a generation of innovators and researchers in the field of virtual character simulation and animation. Advances in technology are pushing towards making VR/AR worlds a daily experience. While virtual characters are an important component of these worlds, bringing them to life and giving them interaction and communication abilities requires highly specialized programming combined with artistic skills, and considerable investments: the millions spent on countless coders and designers to develop video games are a typical example. The research objective of CLIPE is to design the next generation of VR-ready characters. CLIPE addresses the most important current aspects of the problem, making the characters capable of behaving more naturally, interacting with real users sharing a virtual experience with them, and being more intuitively and extensively controllable for virtual world designers. To meet these objectives, the CLIPE consortium gathers some of the main European actors in the fields of VR/AR, computer graphics, computer animation, psychology and perception. CLIPE also extends its partnership to key industrial actors of populated virtual worlds, giving students the ability to explore new application fields and start collaborations beyond academia. This work is performed in collaboration with Julien Pettré from the Virtus team.
CrowdDNA

Participants: Ludovic Hoyet, Charles Pontonnier, Anne-Hélène Olivier.

  • Title:
    CrowdDNA
  • Duration:
    2020 - 2024
  • Coordinator:
    Inria
  • Partners:
    • Inria (Fr)
    • ONHYS (FR)
    • University of Leeds (UK)
    • Crowd Dynamics (UK)
    • Universidad Rey Juan Carlos (ES)
    • Forschungszentrum Jülich (DE)
    • Universität Ulm (DE)
  • Inria contact:
    Julien Pettré, team Virtus
  • Summary:
    This project aims to enable a new generation of “crowd technologies”, i.e., systems that can prevent deaths, minimize discomfort and maximize efficiency in the management of crowds. It performs an analysis of crowd behaviour to estimate the characteristics essential to understanding the crowd's current state and predicting its evolution. CrowdDNA is particularly concerned with the dangers and discomforts associated with very high-density crowds, such as those that occur at cultural or sporting events or in public transport systems. The main idea behind CrowdDNA is that the analysis of new kinds of macroscopic features of a crowd (such as the apparent motion field, which can be efficiently measured in real mass events; see the illustrative sketch below) can reveal valuable information about its internal structure, provide a precise estimate of the crowd state at the microscopic level and, more importantly, predict its potential to generate dangerous crowd movements. This way of understanding low-level states from high-level observations is similar to the way humans can tell a lot about the physical properties of a given object just by looking at it, without touching it. CrowdDNA challenges the existing paradigms, which rely on simulation technologies to analyse and predict crowds, and which require complex estimations of many features such as density, counts or individual features to calibrate simulations. This vision raises one main scientific challenge, which can be summarized as the need for a deep understanding of the numerical relations between the local (microscopic) scale of crowd behaviours (e.g., contacts and pushes at the limb scale) and the global (macroscopic) scale, i.e. the entire crowd. This work is performed in collaboration with Julien Pettré from the Virtus team.
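
To make the notion of an apparent motion field concrete, here is a minimal, hypothetical sketch (not the CrowdDNA pipeline) that extracts a dense per-pixel motion field from a crowd video using OpenCV's Farnebäck optical flow; the file name is a placeholder:

    import cv2
    import numpy as np

    # Read a crowd video and estimate the dense apparent motion field
    # between consecutive frames ("crowd.mp4" is a placeholder path).
    cap = cv2.VideoCapture("crowd.mp4")
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # flow[y, x] = (dx, dy): per-pixel displacement between the two frames
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitude = np.linalg.norm(flow, axis=2)
        print("mean apparent speed (px/frame):", magnitude.mean())
        prev_gray = gray
    cap.release()

Macroscopic statistics of such a field (mean speed, divergence, spatial coherence) are the kind of high-level observables from which the project seeks to infer microscopic crowd states.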

9.3.2 H2020 projects

PRESENT

Participants: Adèle Colas, Ludovic Hoyet, Alberto Jovane, Anne-Hélène Olivier, Pierre Raimbaud, Katja Zibrek.

  • Title:
    Photoreal REaltime Sentient ENTity
  • Duration:
    2019 - 2023
  • Coordinator:
    Universitat Pompeu Fabra
  • Partners:
    • Framestore (UK)
    • Brainstorm (ES)
    • Cubic Motion (UK)
    • InfoCert (IT)
    • Universitat Pompeu Fabra (ES)
    • Universität Augsburg (DE)
    • Inria (FR)
    • CREW (BE)
  • Inria contact:
    Julien Pettré, team Virtus
  • Summary:
    PRESENT is a three-year Research and Innovation project to create virtual digital companions (embodied agents) that look entirely naturalistic, demonstrate emotional sensitivity, can establish meaningful dialogue, add sense to the experience, and act as trustworthy guardians and guides in the interfaces for AR, VR and more traditional forms of media. There is no higher-quality interaction than the human experience, when we use all our senses together with language and cognition to understand our surroundings and, above all, to interact with other people. We interact with today’s ‘Intelligent Personal Assistants’ primarily by voice; communication is episodic, based on a request-response model. The user does not see the assistant, which does not take advantage of visual and emotional clues or evolve over time. However, advances in the real-time creation of photorealistic computer-generated characters, coupled with emotion recognition, behaviour and natural language technologies, allow us to envisage virtual agents that are realistic in both looks and behaviour; that can interact with users through vision, sound, touch and movement as they navigate rich and complex environments; converse in a natural manner; respond to moods and emotional states; and evolve in response to user behaviour. PRESENT will create and demonstrate a set of practical tools, a pipeline and APIs for creating realistic embodied agents and incorporating them in interfaces for a wide range of applications in entertainment, media and advertising. This work is performed in collaboration with Julien Pettré from the Virtus team.
SCHEDAR

Participants: Franck Multon [contact], Richard Kulpa, Xiaofang Wang.

  • Title:
    Safeguarding the Cultural HEritage of Dance through Augmented Reality
  • Duration:
    June 2018 - June 2022
  • Coordinator:
    Cyprus University
  • Partners:
    • RISE, UNIVERSITY OF CYPRUS, (Cyprus)
    • ALGOLYSIS LTD (Cyprus)
    • UNIVERSITY OF WARWICK (UK)
    • CRESTIC, UNIVERSITE DE REIMS CHAMPAGNE ARDENNES (France)
    • M2S/MIMETIC, UNIVERSITE RENNES2 (France)
  • Inria contact:
    Franck MULTON
  • Summary:
    Dance is an integral part of any culture. Through its choreography and costumes, dance imparts richness and uniqueness to that culture. Over the last decade, technological developments have been exploited to record, curate, remediate, provide access to, preserve and protect tangible Cultural Heritage. However, intangible assets, such as dance, have largely been excluded from this previous work. Recent computing advances have enabled the accurate 3D digitization of human motion. Such systems provide a new means for capturing, preserving and subsequently re-creating Intangible Cultural Heritage (ICH) which goes far beyond traditional written or imaging approaches. However, 3D motion data are expensive to create and maintain, the semantic information they encompass is difficult to extract and formalize, and current software tools to search and visualize these data are too complex for most end-users. SCHEDAR will provide novel solutions to the three key challenges of archiving, re-using and re-purposing, and ultimately disseminating ICH motion data. In addition, we will devise a comprehensive set of new guidelines, a framework and software tools for leveraging existing ICH motion databases. Data acquisition will be undertaken holistically, encompassing data related to the performance, the performer, the kind of dance, the hidden/untold story, etc. Innovative use of state-of-the-art multisensory Augmented Reality technology will enable direct interaction with the dance, providing new experiences and training in traditional dance, which is key to ensuring this rich cultural asset is preserved for future generations. MimeTIC is responsible for WP3 "Dance Data Enhancement".
ForEVR
  • Title:
    Appealing Character Design for Embodied Virtual Reality
  • Duration:
    From September 1, 2021 to August 31, 2023
  • Partners:
    • INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE (INRIA), France
  • Inria contact:
    Katja Zibrek
  • Coordinator:
    INRIA
  • Summary:
    The ForEVR project is focused on delivering a radically improved interaction with another user or virtual character in immersive Embodied Virtual Reality (EVR). EVR offers the possibility of interacting with another person in a computer-generated world by using virtual humans, and the interaction can closely resemble real-life, physical interaction. Embodied virtual interaction creates a powerful illusion of presence with another and is a valuable candidate for an alternative to physical meetings. A strikingly overlooked problem of such interactive systems is the representation of humans in virtual environments. Current designs are driven by commercial models, which are frequently subject to bias and are not appropriate for non-entertainment applications. On the other hand, characters which appear almost human can cause discomfort and negative evaluations from users. Since virtual reality can induce realistic responses in people, inappropriate character representations can have psychologically damaging effects on users. The ForEVR project proposes a radical approach to solving the problem of character design by focusing on the appeal of character motion, to compensate for the issues created by realistic appearance. It will employ rigorous methods to define, implement and evaluate motion processing techniques for realistic characters in order to improve their overall appeal, while remaining suitable for non-entertainment applications. Users’ responses to the characters will be evaluated using standard measures from psychology and novel measures of EVR. The ForEVR project is interdisciplinary (psychology/computer graphics) and involves one of the foremost research institutes, with a research team specializing in creating behavior for autonomous virtual humans, while collaborating with an industry partner who is developing EVR for mental health and rehabilitation applications. This project will help the fellow achieve the academic independence needed for a tenure position.
INVICTUS
  • Title:
    Innovative Volumetric Capture and Editing Tools for Ubiquitous Storytelling
  • Duration:
    From October 1, 2020 to December 31, 2022
  • Partners:
    • INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE (INRIA), France
    • VOLOGRAMS LIMITED, Ireland
    • FRAUNHOFER GESELLSCHAFT ZUR FORDERUNG DER ANGEWANDTEN FORSCHUNG EV (FHG), Germany
    • INTERDIGITAL R and D FRANCE, France
    • UBISOFT MOTION PICTURES (UBISOFT MOTION PICTURES SARL), France
    • UNIVERSITE DE RENNES I (UR1), France
  • Inria contact:
    Ludovic Hoyet
  • Coordinator:
    UNIVERSITE DE RENNES I
  • Summary:

    The INVICTUS project aims at delivering innovative authoring tools for the creation of a new generation of high-fidelity avatars (numerical representations of real humans) and the integration of these avatars in interactive and non-interactive narratives (movies, games, AR/VR immersive productions). The project proposes to (i) exploit the full potential of recent volumetric motion capture technologies, which capture simultaneously the appearance (shape, texture, material) and the motion of actors using simple RGB cameras to create volumetric avatars, and (ii) rely on these technologies to design narratives using novel collaborative VR authoring tools.

    INVICTUS proposes the design of three innovative authoring tools:

    a tool to perform high-resolution volumetric captures of both appearance and motion of characters and that will enable their exploitation in both high-end off-line productions (film quality) and real-time rendering productions. This will ease high-fidelity content creation and reduce costs through less manual labor.

    a tool to perform edits on high-fidelity volumetric appearances and motions, such as transferring shapes between characters, performing stylization of appearance, adapting and transferring motions. This will reduce manual labor and improve fidelity.

    a story authoring tool that will build on interactive VR technologies to plunge storytellers into virtual representations of their stories to edit decors, layouts and animated characters, improving productivity and creativity.

    By demonstrating and communicating how these technologies can be immediately exploited in both traditional media (film/animation) and novel media (VR + AR) narratives, the INVICTUS project will open opportunities in the EU market for more compelling, immersive and personalized visual experiences, at the crossroads of film and game entertainment.

9.3.3 ANR

ANR PRC CAPACITIES

Participants: Charles Pontonnier [contact], Georges Dumont, Diane Haering, Claire Livet.

CAPACITIES is a 42-month ANR PRC project (2020-2024).

This project is led by Christophe Sauret, from INI/CERAH. The project objective is to build a series of biomechanical indices characterizing biomechanical difficulty for a wide range of urban environmental situations. These indices will rely on different biomechanical parameters such as proximity to joint limits, forces applied on the handrims, mechanical work, muscle and articular stresses, etc. The definition of a more comprehensive index, called the Comprehensive BioMechanical (CBM) cost, combining several of the previous indices, will also be a challenging objective (a toy illustration of such an aggregation is sketched below). The results of this project will then be used in the first place in the VALMOBILE application, to assist manual wheelchair (MWC) users in selecting an optimal route in the Valenciennes agglomeration (a project funded by the French National Agency for Urban Renewal and the North Department of France). The MimeTIC team is involved in the musculoskeletal simulation issues and the definition of the biomechanical costs. The funding for the team is about 80k€.
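
As a toy illustration of how several normalised indices might be aggregated into a single composite cost (the index names, normalisation bounds and weights below are hypothetical, not the project's actual CBM definition):

    import numpy as np

    def normalise(value, worst, best):
        # Map a raw biomechanical index to [0, 1], where 1 is most difficult.
        return float(np.clip((value - best) / (worst - best), 0.0, 1.0))

    # Hypothetical raw measurements for one route segment.
    indices = {
        "joint_limit_proximity": normalise(0.8, worst=1.0, best=0.0),     # unitless
        "handrim_force":         normalise(60.0, worst=120.0, best=0.0),  # N
        "mechanical_work":       normalise(35.0, worst=80.0, best=0.0),   # J/m
    }
    weights = {"joint_limit_proximity": 0.4,
               "handrim_force": 0.35,
               "mechanical_work": 0.25}  # weights sum to 1

    cbm_cost = sum(weights[k] * indices[k] for k in indices)
    print(f"composite biomechanical cost: {cbm_cost:.2f}")  # 0 = easy, 1 = hard

Such a scalar cost per route segment is what an application like VALMOBILE could then minimise along candidate routes.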

ANR JCJC Per2

Participants: Ludovic Hoyet [contact], Benjamin Niay, Anne-Hélène Olivier, Richard Kulpa, Franck Multon.

Per2 is a 42-month ANR JCJC project (2018-2022) entitled Perception-based Human Motion Personalisation (Budget: 280k€).

The objective of this project is to focus on how viewers perceive motion variations, in order to automatically produce natural motion personalisation accounting for inter-individual variations. In short, our goal is to automate the creation of motion variations to represent given individuals according to their own characteristics, and to produce natural variations that are perceived and identified as such by users. The challenges addressed in this project consist in (i) understanding and quantifying what makes the motions of individuals perceptually different, (ii) synthesising motion variations based on these identified relevant perceptual features, according to given individual characteristics, and (iii) leveraging the synthesis of motion variations even further by exploring their creation for interactive large-scale scenarios where both performance and realism are critical.

This work is performed in collaboration with Julien Pettré from the Rainbow team.

ANR JCJC 3D MOVE

Participant: Anne-Hélène Olivier.

3D MOVE is a 42-month ANR JCJC project (2019-2024) entitled Learning to synthesize 3D dynamic human motion (Budget: 303k€; Coordinator: Stefanie Wuhrer, Inria Grenoble).

The main objective of this project is to automatically compute very high-quality generative models from a dense raw database of motion sequences of face and body movements. The scientific challenges involve the alignment of the motion representations as well as the learning of generative models. The main hypotheses are that there is a strong correlation between the morphology of a person and his/her movement, and that learning-based approaches can model this correlation. This new generation of models would increase the realism of the generated virtual humans.

ANR JCJC DeepHD

Participants: Adnane Boukhayma [contact].

DeepHD is a 48-month ANR JCJC project (2022-2026) entitled Deep Human Digitization from Minimal Input (Budget: 315k€).

The goal of this project is to develop a new generation of deep-learning-based methods that can create accurate digital 3D replicas of people from minimal input, such as a single color image captured with a simple consumer-grade camera. These replicas can take the form of an explicit colored 3D reconstruction of the subject, such as a mesh, or of an implicit function encoding their 3D shape and appearance, which can subsequently be tessellated into an explicit 3D model or rendered from novel viewpoints (a minimal sketch of such an implicit representation is given below). While 3D reconstruction from such minimal input has long been considered an inherently ill-posed problem, the recent surge in deep learning has made it possible by learning strong statistical priors on the target shapes, which has proved successful in many recent works, especially for class-specific shape reconstruction problems. However, existing deep-learning-based solutions still suffer from computationally and memory-intensive data representations, heavy dependence on expensive full 3D supervision, and weak generalization, especially for in-the-wild input images of people in extreme poses and complex clothing garments. We expect our model to improve on these aspects of the state of the art through a set of methodological contributions organised in three scientific challenges. The implicit and explicit 3D representations recovered by our model can power a myriad of human-centered applications, ranging from the democratization of 3D people scanning and human avatarization for entertainment and virtual reality content to enabling human-machine interactions. In particular, the application we are most excited about is the prospect of a novel minimalist immersive video chat technology that only requires a standard video stream to allow users to view their interlocutor interactively from any viewpoint, thanks to the real-time performance that could be achieved with our proposed efficient, fully differentiable deep learning model inference. We believe this motivation is exceptionally timely in these Covid-19 pandemic times, as it could improve our remote communication experience through digital devices.
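
To fix ideas about the implicit representations mentioned above, here is a minimal, hypothetical PyTorch sketch of a network encoding a shape as a signed distance function; the architecture is a generic one from the implicit-reconstruction literature, not the DeepHD model:

    import torch
    import torch.nn as nn

    class ImplicitSDF(nn.Module):
        # Map a 3D point to a signed distance (negative inside the shape).
        def __init__(self, hidden=256, layers=8):
            super().__init__()
            dims = [3] + [hidden] * layers + [1]
            blocks = []
            for i in range(len(dims) - 1):
                blocks.append(nn.Linear(dims[i], dims[i + 1]))
                blocks.append(nn.Softplus(beta=100))
            self.net = nn.Sequential(*blocks[:-1])  # no activation on the output

        def forward(self, points):        # points: (N, 3)
            return self.net(points)       # (N, 1) signed distances

    sdf = ImplicitSDF()
    query = torch.rand(1024, 3) * 2 - 1   # query points in [-1, 1]^3
    distances = sdf(query)                # once trained, the zero level set is the surface

After training against 3D (or weaker) supervision, the zero level set of such a function can be tessellated into an explicit mesh, e.g. with marching cubes.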

ANR HoBiS

Participants: Franck Multon [contact], Armel Cretual, Georges Dumont, Charles Pontonnier, Anthony Sorel, Benjamin Gamblin, Nils Hareng.

HoBiS is a 42-month ANR collaborative (PRCI) project (2018-2022) entitled Hominin BipedalismS: Exploration of bipedal gaits in Hominins thanks to Specimen-Specific Functional Morphology. HoBiS is led by the Muséum national d'Histoire naturelle (MNHN), with CNRS/LAAS and Antwerpen University (Belgium), with a total budget of 541k€ (140k€ for MimeTIC). HoBiS is a pluridisciplinary research project, fundamental in nature and centred on palaeoanthropological questions related to habitual bipedalism, one of the most striking features of the human lineage. Recent discoveries (up to 7 My) highlight an unexpected diversity of locomotor anatomies in Hominins, leading palaeoanthropologists to hypothesize that habitual bipedal locomotion took distinct shapes through our phylogenetic history. In early Hominins, this diversity could reveal a high degree of locomotor plasticity which favoured their evolutionary success in the changing environments of the late Miocene and Pliocene. Furthermore, one can hypothesize, based on biomechanical theory, that differences in gait characteristics, even slight ones, have impacted the energy balance of hominin species and thus their evolutionary success. However, given the fragmented nature of fossil specimens, previous morphometric and anatomo-functional approaches developed by biologists and palaeoanthropologists do not allow the assessment of the biomechanical and energetic impacts of such subtle morphological differences, and the manner in which hominin species walked still remains unknown. To tackle this problem, HoBiS proposes as its main objective a totally new specimen-specific approach in evolutionary anthropology, named Specimen-Specific Functional Morphology: inferring plausible complete locomotor anatomies based on fossil remains, and linking these reconstructed anatomies and the corresponding musculoskeletal models (MSM) with plausible gaits using simulations. Both sub-objectives will make use of extensive comparative anatomical and biomechanical gait databases. To this end, we will integrate anatomical and functional studies, and tools for anatomical modelling, optimization and simulation rooted in informatics, biomechanics and robotics, to build an in-silico decision-support system (DSS). This DSS will provide biomechanical simulations and energetic estimations of the most plausible bipedal gaits for a variety of hominin species based on available remains, from partial to well-preserved specimens.

MimeTIC leads WP3 "Biomechanical simulation", which aims at predicting plausible bipedal locomotion based on palaeoanthropological heuristics and a given MSM (a toy version of such a gait-prediction step is sketched below).
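
As a toy version of the gait-prediction step, the sketch below searches for the gait parameters minimising an energetic cost, the way a predictive simulation would; the musculoskeletal simulation is replaced here by an invented analytic stand-in, and all coefficients and bounds are illustrative only:

    import numpy as np
    from scipy.optimize import minimize

    def cost_of_transport(x):
        # Invented stand-in for a musculoskeletal simulation: maps step
        # length (m) and cadence (steps/s) to an energetic cost (J/kg/m).
        step_length, cadence = x
        speed = step_length * cadence
        return (2.0 + 1.5 * (step_length - 0.7) ** 2
                    + 0.8 * (cadence - 1.8) ** 2
                    + 0.5 / speed)

    # In the project, bounds would come from the reconstructed specimen-specific MSM.
    result = minimize(cost_of_transport, x0=[0.6, 1.5],
                      bounds=[(0.3, 1.0), (1.0, 2.5)])
    print("most economical gait (step length, cadence):", result.x)

In the actual project, the inner cost evaluation is a full biomechanical simulation of the reconstructed anatomy rather than a closed-form expression.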

PIA PPR Sport REVEA

Participants: Richard Kulpa [contact], Benoit Bideau, Franck Multon.

The REVEA project proposes a new generation of innovative and complementary training methods and tools, based on virtual reality, to increase the number of medals at the Paris 2024 Olympic Games. Indeed, virtual reality offers standardisation, reproducibility and control features that: 1) densify and vary training for very high performance without increasing the associated physical loads, and reduce the risk linked to impacts and/or high-intensity exercises; 2) offer injured athletes the opportunity to continue training during their recovery period, or all athletes during periods of confinement such as experienced with Covid-19; 3) provide objective and quantified assessments of athlete performance and progress; and 4) provide a wide range of training that allows for better retention of learning and adaptability of athletes. Virtual reality offers a range of stimuli that go beyond the limits of reality, such as facing an opponent with extraordinary abilities or seeing an action that has not yet been mastered. The objective of REVEA is therefore to meet the needs of three federations by exploiting the unique properties of virtual reality to improve the motor performance of athletes through the optimisation of the underlying perceptual-motor and cognitive-motor processes. The French Gymnastics Federation wishes to optimise the movements of its gymnasts through the observation of their own motor production, to avoid further increasing the physical training load. The French Boxing Federation wishes to improve the perceptual-motor anticipation capacities of boxers in opposition situations while reducing impacts and therefore the risk of injury. The French Athletics Federation wishes to improve the perceptual-motor anticipation capacities of athletes in cooperative situations (4x100m relay) without running at high intensity. The project is carried out by a multidisciplinary consortium composed of University Rennes 2 (and Inria), University of Reims Champagne-Ardenne, Aix-Marseille University, Paris-Saclay University and INSEP.

PIA PPR Sport BEST Tennis

Participants: Benoit Bideau [contact].

BEST-TENNIS aims to optimize the serve and return-of-serve performance of French Tennis Federation players (able-bodied and wheelchair) through a systemic approach, capitalizing on biomechanical, clinical and cognitive data made available to coaches and athletes through dedicated tools. With its nine events at the Olympic and Paralympic Games, tennis is a sport with a high medal potential.

BEST-TENNIS is funded by the PIA3 PPR "Sport Haute Performance" call. This national project is led by researchers in MimeTIC.

PIA PPR Sport Neptune

Participants: Nicolas Bideau [contact], Benoit Bideau, Guillaume Nicolas.

Swimming is a sport with a high medal potential at the Olympic and Paralympic Games. Races can be decided by a hundredth of a second: every detail of the performance must be evaluated with precision. This is the ambition of the NePTUNE project, with the support of the French Swimming Federation (FFN) and the French Handisport Federation (FFH).

To meet the needs of these sports federations, the NePTUNE project focuses on three areas of work, in order to develop innovative methods and tools for coaches to monitor swimmers. A more advanced strand on human movement and energetics, as well as performance optimization, will also be implemented for more elaborate scientific measurements and research.

The first axis concerns the automatic tracking and race-management strategies of swimmers, in competition and in training race simulations, to support the performance of medallists, detect swimming talent and analyze the competition. Few swimming federations around the world are engaged in this type of procedure, unlike the FFN, which innovates with its semi-automatic tracking system. However, this system needs to be improved in order to offer a fully automatic and more accurate solution.

The second axis studies motor coordination, propulsion and energetics, to understand how behavioral transitions take place and how the frequency/amplitude ratio as well as the underwater part of the race can be optimized. Trainers need miniature, portable sensors (such as inertial units) that automatically and quickly provide the key points of swimming technique in order to maximize effectiveness, efficiency and economy (a toy example of extracting one such key point from inertial data is sketched below).
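
As a toy example of the kind of processing involved, the sketch below estimates stroke frequency from the norm of an accelerometer signal by peak detection; the signal is synthetic, and the sampling rate, stroke frequency and thresholds are invented for illustration:

    import numpy as np
    from scipy.signal import find_peaks

    fs = 100.0                                # sampling rate (Hz), assumed
    t = np.arange(0, 30, 1 / fs)              # 30 s of swimming
    true_stroke_hz = 0.8                      # synthetic ground truth
    acc_norm = (9.81 + 2.0 * np.sin(2 * np.pi * true_stroke_hz * t)
                + 0.3 * np.random.randn(t.size))   # noisy acceleration norm

    # One peak per stroke cycle; enforce a plausible minimum cycle duration.
    peaks, _ = find_peaks(acc_norm, distance=0.5 * fs, prominence=1.0)
    estimated_hz = (len(peaks) - 1) / (t[peaks[-1]] - t[peaks[0]])
    print(f"estimated stroke frequency: {estimated_hz:.2f} Hz")

A real system would add orientation estimation and stroke-phase segmentation, among other things, but the frequency/amplitude analysis mentioned above rests on exactly this kind of cycle detection.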

The third axis focuses on aquatic resistance and the suction effect, because high performance and economy are not only due to efficient propulsion but also to the minimization of passive and active resistance.

MimeTIC is a partner of this PIA3 PPR Sport Haute Performance project, led by Université de Rouen Normandie.

PIA EUR DIGISPORT

Participants: Richard Kulpa [contact], Benoit Bideau.

DIGISPORT (Digital Sport Sciences) is a comprehensive, hybrid graduate school encompassing specialties in both sport and digital sciences. It provides excellence in research and teaching by serving both traditional students and industry professionals, as well as offering formal collaborations with regional research centers. DIGISPORT helps advance the use and efficacy of digital technologies in sports and exercise, impacting all stakeholders, from users, educators and clinical practitioners to managers and actors in the socioeconomic world. From the master's to the doctoral level, the program aims to offer students in initial and continuing training an opportunity to build a study strategy suited to their professional goals and to the labor market. Students build their own learning path through a modular teaching offer, based on pedagogical innovation, hybridization of student populations and interdisciplinary projects. The high-level technological platforms are great training tools, and DIGISPORT will host renowned international researchers for its teaching programs. The Rennes ecosystem is particularly well suited to host the DIGISPORT Graduate School, in both research and education/training. It incorporates world-class research units in the fields of sport (top 300 in the Shanghai ranking), digital sciences (top 100 in the Reuters ranking of the most innovative universities in Europe, and top 300 in medical technologies in the Shanghai ranking), electronics (top 200 in the Shanghai telecommunications ranking) and human and social sciences. The research units involved in DIGISPORT are affiliated with CNRS joint labs (IRISA, IETR, IRMAR, CREST), Inria teams, the Grandes Écoles network (ENS Rennes, INSA Rennes, CentraleSupélec, ENSAI), Université de Rennes 1 and Université Rennes 2. Rennes is also a proven socioeconomic incubator, with a large network of companies organized around the Images et Réseaux cluster, French Tech-Le Poool and prominent sport institutions (CROS, Campus Sport Bretagne).

9.3.4 Défi Avatar

Participants: Jean Basset, Ludovic Hoyet [contact], Maé Mavromatis, Franck Multon.

This project aims at designing avatars (i.e., the user’s representation in virtual environments) that are better embodied, more interactive and more social, by improving the whole avatar pipeline, from acquisition and simulation to the design of novel interaction paradigms and multi-sensory feedback. It involves 6 Inria teams (GraphDeco, Hybrid, Loki, MimeTIC, Morpheo, Potioc), Prof. Mel Slater (Uni. Barcelona) and 2 industrial partners (InterDigital and Faurecia).


9.3.5 Défi Ys.AI

Participants: Franck Multon [contact], Ludovic Hoyet, Adnane Boukhayma, Tangui Marchand Guerniou.

The metaverse became a buzzword following recent announcements of massive investments in these new universes. The concept is not new: Second Life and a few other past attempts opened the door to this new way of communicating. The metaverse can be seen as the future of social and professional immersive communication for the emergent AI-based e-society, and it builds on a large number of technologies (AR/VR/XR, blockchain/NFT, AI, CG, low-latency networks, interaction, coding, rendering...).

InterDigital foresees high potential in the development of metaverse technologies and associated representation formats. However, the panel of technologies to be addressed is too large, and a selection has to be made. One of these technologies is the representation of humans within these mixed real and virtual environments, and how they interact with each other and with the scene (including its objects). The goal of the project is thus to address this topic, with a special focus on the representation formats for digital avatars and their behavior in a digital, responsive environment. It includes the study of deep learning and, more generally, artificial intelligence technologies applied to the interaction and animation of avatars and the associated representation format(s). The main challenge is to overcome the uncanny valley effect, in order to provide users with natural, lifelike social interactions between real and virtual actors, leading to full engagement in those future metaverse experiences.

9.3.6 ADT PyToM

Participants: Charles Pontonnier [contact], Laurent Guillo, Georges Dumont, Salomé Ribault.

This project (2021-2023), funded by Inria, aims at developing a Python version of CusToM, our musculoskeletal analysis library currently developed in Matlab. The project also adds new motion data inputs (vision, depth cameras) to the library, to enhance the usability of the analysis tools.

9.4 Regional initiatives

9.4.1 KIMEA Cloud Project

Participants: Franck Multon [contact], Adnane Boukhayma, Shubhendu Jena.

The Brittany region and the “Images & Réseaux” cluster have funded the project “Kimea Cloud” (Sept 2020 - June 2022). This project, led by Moovency (a start-up created in 2018 based on a PhD from MimeTIC), aims at proposing a new system to assess painful movements at work using lightweight devices, such as a simple camera. The Quortex company (www.quortex.io) provides its expertise in delivering video content over the Internet, to make the assessment remote and efficient for future customers. MimeTIC provides a robust 3D posture tracking system based on the video stream.

9.4.2 EXOSCARNE Project

Participants: Charles Pontonnier [contact], Simon Kirchhofer.

This project (2020-2023), funded by the Brittany region and endorsed by the competitiveness clusters Valorial and EMC2, aims at designing, prototyping and commercializing a wrist exoskeleton able to help industrial butchers in their cutting and deboning tasks. It is a partnership between an R&D company called Lab4i, the MimeTIC team and the industrial butchery Cooperl. Our role in the consortium is the development of a virtual prototyping tool, based on musculoskeletal modeling, to simulate the action of the exoskeleton on the wrist, and the characterization of the impact of the real prototype on the action of the user through full-scale experiments involving motion, force and muscle activity sensing. The project funding is about 130k€ for the team.

9.4.3 Stratégie d'Attractivité Durable (SAD) REACTIVE

Participants: Anne-Hélène Olivier [contact], Ludovic Hoyet, Pierre Raimbaud, Katja Zibrek.

This project (2020-2022) is funded by the Brittany region. The general context of REACTIVE concerns the interactions between users and reactive, expressive virtual humans. A prerequisite for the creation of such virtual humans is to understand how "real" humans interact and react to the behaviors and reactions of others; this is the core of the REACTIVE project. It focuses on the elements of non-verbal communication derived from body movements. The underlying idea is to identify, through an analysis involving the Movement Sciences, invariants in bodily reactions to interaction situations involving several people. Through an approach based on the Digital Sciences, we propose to model these invariants to create new animations of reactive and expressive virtual humans. This project was performed in collaboration with Julien Pettré and Claudio Pacchierotti.

10 Dissemination

Participants: Franck Multon, Adnane Boukhayma, Ludovic Hoyet, Nicolas Bideau, Benoit Bideau, Marc Christie, Armel Cretual, Georges Dumont, Diane Haering, Richard Kulpa, Fabrice Lamarche, Guillaume Nicolas, Anne-Hélène Olivier, Charles Pontonnier.

10.1 Promoting scientific activities

10.1.1 Scientific events: organisation

General chair, scientific chair
  • Anne-Hélène Olivier and Katja Zibrek: Organization of the Virtual Humans and Crowd in Immersive Environment workshop in the frame of the IEEE VR 2022 conference
  • Charles Pontonnier: Organization of the first Sciences2024 Summer School, held in Dinard, France, in June 2022

10.1.2 Scientific events: selection

Member of the conference program committees
  • Anne-Hélène Olivier: IEEE VR 2022, EuroVR 2022, SAP 2022
  • Ludovic Hoyet: ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games 2022, ACM Symposium on Applied Perception 2022, ACM SIGGRAPH Asia 2022 (Technical Papers COI), IEEE International Conference on Artificial Intelligence & Virtual Reality 2022, Journées Françaises de l'Informatique Graphique 2022
  • Charles Pontonnier: Member of the scientific committee of the Congrès de la Société de Biomécanique
Reviewer
  • Ludovic Hoyet: ACM SIGGRAPH 2022, ACM CHI Conference on Human Factors in Computing Systems 2023, Eurographics 2022
  • Franck Multon: MIG 2022, AIVR
  • Anne-Hélène Olivier: CEIG 2022, Eurographics 2022, ISMAR 2022
  • Charles Pontonnier: Congrès de la Société de Biomécanique 2022, IEEE VR 2022, IROS 2022

10.1.3 Journal

Member of the editorial boards
  • Franck Multon is associate editor of the journal Computer Animation and Virtual Worlds (Wiley)
  • Franck Multon is associate editor of the journal Presence (MIT Press)
  • Franck Multon is guest editor for the MDPI Sensors special issue on "Wearables and Computer Vision for Sports Motion Analysis"
  • Charles Pontonnier is Review Editor for Frontiers in Mechanical Engineering
Reviewer - reviewing activities
  • Ludovic Hoyet: International Journal of Human - Computer Studies
  • Franck Multon: Computers and Graphics, IEEE Robotics and Automation, Computer Animation and Virtual Worlds, Journal of Medical Systems, Sensors, Frontiers in Public Health, Virtual Reality journal, Frontiers in Psychology, International Journal of Human - Computer Studies, Frontiers in Virtual Reality, MDPI Applied Sciences, MDPI Design, IEEE Trans. on Visualization and Computer Graphics, International Journal of Environmental Research and Public Health, International Journal of Industrial Ergonomics, Journal of NeuroEngineering and Rehabilitation, MDPI Bioengineering
  • Anne-Hélène Olivier: Gait and Posture, Human Movement Science, Plos One, IEEE RAS
  • Charles Pontonnier: IEEE Robotics and Automation Letters, Journal of Biomechanics, Applied Ergonomics, Applied Sciences, International Journal of Computer Assisted Radiology and Surgery, Heliyon

10.1.4 Invited talks

  • Charles Pontonnier: Sciences 2024 seminars, GDR CNRS robotique, SCI-Rennes

10.1.5 Leadership within the scientific community


10.1.6 Scientific expertise

  • Anne-Hélène Olivier: Expertise for the ESR Program, Région Nouvelle Aquitaine, and for the ANR AAPG, Interfaces: digital sciences - humanities and social sciences
  • Charles Pontonnier: ANR AAPG, Swiss National Science Foundation

10.1.7 Research administration

  • Ludovic Hoyet is the coordinator of the Inria Research Challenge Avatar
  • Ludovic Hoyet is the coordinator of the ANR JCJC Per2
  • Franck Multon is responsible for the coordination of national Inria actions in Sports
  • Franck Multon is the scientific representative of Inria in the Sciences2024 group and scientific committee
  • Franck Multon is the scientific representative of Inria in the EUR DIGISPORT steering committee and scientific committee
  • Franck Multon is the co-director of the "Nemo.AI" joint Lab with InterDigital, and the associated Défi Ys.AI
  • Franck Multon is member of the Brittany commission of deontology
  • Armel Crétual is the elected head of the Sports Sciences department (STAPS) at University Rennes 2
  • Benoit Bideau is the head of the M2S Laboratory
  • Benoit Bideau is vice-president of University Rennes 2, in charge of valorisation
  • Benoit Bideau is the leader of the EUR DIGISPORT project
  • Charles Pontonnier is the deputy director of the Mechatronics teaching and research department of ENS Rennes
  • Charles Pontonnier is member of the EUR DIGISPORT pedagogical committee
  • Anne-Hélène Olivier is a co-director of the master program in adapted physical activity at the University of Rennes 2
  • Richard Kulpa is the co-leader of the EUR DIGISPORT project
  • Richard Kulpa is the scientific head of the EUR DIGISPORT project
  • Richard Kulpa is the leader of the PPR REVEA project
  • Marc Christie is the Project Coordinator of the H2020 ICT-55 project INVICTUS

10.2 Teaching - Supervision - Juries

  • Master : Franck Multon, co-leader of the IEAP Master (1 and 2) "Ingénierie et Ergonomie de l'Activité Physique", STAPS, University Rennes 2, France
  • Master : Franck Multon, "Santé et Performance au Travail : étude de cas", leader of the module, 30H, Master 1 M2S, University Rennes 2, France
  • Master : Franck Multon, "Analyse Biomécanique de la Performance Motrice", leader of the module, 30H, Master 1 M2S, University Rennes 2, France
  • Master: Charles Pontonnier, leader of the first year of the master "Ingénierie des systèmes complexes", mechatronics, École normale supérieure de Rennes, France
  • Master: Charles Pontonnier, responsible for student internships (L3 and M1 "Ingénierie des systèmes complexes"), 15H, Ecole Normale Supérieure de Rennes, France
  • Master: Charles Pontonnier, "Numerical simulation of polyarticulated systems", leader of the module, 22H, M1 Mechatronics, Ecole Normale Supérieure de Rennes, France
  • Master: Charles Pontonnier, Research projects, 20H, Ecole Normale Supérieure de Rennes, France
  • Master : Georges Dumont, Responsible for the second year of the master Engineering of complex systems, École Normale Supérieure de Rennes and Rennes 1 University, France
  • Master : Georges Dumont, Mechanical simulation in Virtual reality, 36H, Master Engineering of complex systems and Mechatronics, Rennes 1 University and École Normale Supérieure de Rennes, France
  • Master : Georges Dumont, Mechanics of deformable systems, 40H, Master, École Normale Supérieure de Rennes, France
  • Master : Georges Dumont, oral preparation for the agrégation competitive exam, 20H, Master, École Normale Supérieure de Rennes, France
  • Master : Georges Dumont, Vibrations in Mechanics, 10H, Master, École Normale Supérieure de Rennes, France
  • Master : Georges Dumont, Finite Element method, 12H, Master, École Normale Supérieure de Rennes, France
  • Master : Ludovic Hoyet, Computer Graphics, 12h, Ecole Normale Supérieure de Rennes, France
  • Master : Ludovic Hoyet, Motion for Animation and Robotics, 6h, Université Rennes 1, France
  • Master : Ludovic Hoyet, Motion Analysis and Gesture Recognition, 12h, INSA Rennes, France
  • Master : Anne-Hélène Olivier, co-leader of the APPCM Master (1 and 2) "Activités Physiques et Pathologies Chroniques et Motrices", STAPS, University Rennes 2, France
  • Master : Anne-Hélène Olivier, "Biostatstiques", 21H, Master 2 APPCM/IEAP, University Rennes 2, France
  • Master : Anne-Hélène Olivier, "Evaluation fonctionnelle des pathologies motrices", 3H Master 2 APPCM, University Rennes 2, France
  • Master : Anne-Hélène Olivier, "Maladie neurodégénératives : aspects biomécaniques", 2H Master 1 APPCM, University Rennes 2, France
  • Master : Anne-Hélène Olivier, "Biostatstiques", 7H, Master 1 EOPS, University Rennes 2, France
  • Master : Anne-Hélène Olivier, "Méthodologie", 10H, Master 1 IEAP/APPCM, University Rennes 2, France
  • Master : Anne-Hélène Olivier, "Contrôle moteur : Boucle perceptivo-motrice", 3H, Master 1IEAP, Université Rennes 2, France
  • Master: Fabrice Lamarche, "Compilation pour l'image numérique", 29h, Master 1, ESIR, University of Rennes 1, France
  • Master: Fabrice Lamarche, "Synthèse d'images", 12h, Master 1, ESIR, University of Rennes 1, France
  • Master: Fabrice Lamarche, "Synthèse d'images avancée", 28h, Master 1, ESIR, University of Rennes 1, France
  • Master: Fabrice Lamarche, "Modélisation Animation Rendu", 36h, Master 2, ISTIC, University of Rennes 1, France
  • Master: Fabrice Lamarche, "Jeux vidéo", 26h, Master 2, ESIR, University of Rennes 1, France
  • Master: Fabrice Lamarche, "Motion for Animation and Robotics", 9h, Master 2 SIF, ISTIC, University of Rennes 1, France.
  • Master : Armel Crétual, "Méthodologie", leader of the module, 20H, Master 1 M2S, University Rennes 2, France
  • Master : Armel Crétual, "Biostatstiques", leader of the module, 30H, Master 2 M2S, University Rennes 2, France
  • Richard Kulpa is the co-leader of the new master "Sciences du Numérique et Sport" at University of Rennes 2
  • Master : Richard Kulpa, "Boucle analyse-modélisation-simulation du mouvement", 27h, leader of the module, Master 2, Université Rennes 2, France
  • Master : Richard Kulpa, "Méthodes numériques d'analyse du geste", 27h, leader of the module, Master 2, Université Rennes 2, France
  • Master : Richard Kulpa, "Cinématique inverse", 3h, leader of the module, Master 2, Université Rennes 2, France
  • Master: Marc Christie, Head of Master 2 Ingénierie Logicielle (45 students)
  • Master: Marc Christie, "Multimedia Mobile", Master 2, leader of the module, 32h (IL) + 32h (Miage), Computer Science, University of Rennes 1, France
  • Master: Marc Christie, "Projet Industriel Transverse", Master 2, 32h, leader of the module, Computer Science, University of Rennes 1, France
  • Master: Marc Christie, "Modelistion Animation Rendu", Master 2, 16h, leader of the module, Computer Science, University of Rennes 1, France
  • Master: Marc Christie, "Web Engineering", Master 1, 16h, leader of the module, Computer Science, University of Rennes 1, France
  • Master: Marc Christie, "Advanced Computer Graphics", Master 1, 10h, leader of the module, Computer Science, ENS, France
  • Master: Marc Christie, "Motion for Animation and Robotics", Master 2 SIF, Computer Science, France
  • Licence : Franck Multon, "Ergonomie du poste de travail", Licence STAPS L2 & L3, University Rennes 2, France
  • Licence: Charles Pontonnier, "Lagrangian Mechanics" , leader of the module, 22H, M2 Mechatronics, Ecole Normale Supérieure de Rennes, France
  • Licence: Charles Pontonnier, "Serial Robotics", leader of the module, 24H, L3 Mechatronics, Ecole Normale Supérieure de Rennes, France
  • Licence : Anne-Hélène Olivier, "Analyse cinématique du mouvement", 100H , Licence 1, University Rennes 2, France
  • Licence : Anne-Hélène Olivier, "Anatomie fonctionnelle", 7H , Licence 1, University Rennes 2, France
  • Licence : Anne-Hélène Olivier, "Effort et efficience", 12H , Licence 2, University Rennes 2, France
  • Licence : Anne-Hélène Olivier, "Locomotion et handicap", 12H , Licence 3, University Rennes 2, France
  • Licence : Anne-Hélène Olivier, "Biomécanique spécifique aux APA", 8H , Licence 3, University Rennes 2, France
  • Licence : Anne-Hélène Olivier, "Biomécanique du viellissement", 12H , Licence 3, University Rennes 2, France
  • Licence: Fabrice Lamarche, "Initiation à l'algorithmique et à la programmation", 56h, License 3, ESIR, University of Rennes 1, France
  • License: Fabrice Lamarche, "Programmation en C++", 46h, License 3, ESIR, University of Rennes 1, France
  • Licence: Fabrice Lamarche, "IMA", 24h, License 3, ENS Rennes, ISTIC, University of Rennes 1, France
  • Licence : Armel Crétual, "Analyse cinématique du mouvement", 100H, Licence 1, University Rennes 2, France
  • Licence : Richard Kulpa, "Biomécanique (dynamique en translation et rotation)", 48h, Licence 2, Université Rennes 2, France
  • Licence : Richard Kulpa, "Méthodes numériques d'analyse du geste", 48h, Licence 3, Université Rennes 2, France
  • Licence : Richard Kulpa, "Statistiques et informatique", 15h, Licence 3, Université Rennes 2, France

10.2.1 Supervision

  • PhD stopped (beginning January 2019, ended April 2022): Nils Hareng, simulation of plausible bipedal locomotion of human and non-human primates, University Rennes 2, Franck Multon & Bruno Watier (CNRS LAAS in Toulouse)
  • PhD defended (beginning January 2019, defended March 2022): Nicolas Olivier, Adaptive Avatar Customization for Immersive Experience, Cifre InterDigital, Franck Multon, Ferran Argelaguet (Hybrid team), Quentin Avril (InterDigital), Fabien Danieau (InterDigital)
  • PhD in progress (beginning September 2017): Lyse Leclerc, Intérêts dans les activités physiques du rétablissement de la fonction inertielle des membres supérieurs en cas d’amputation ou d’atrophie, Armel Crétual, Diane Haering
  • PhD defended (beginning September 2018, defended June 2022) : Jean Basset, Learning Morphologically Plausible Pose Transfer, Inria, Edmond Boyer (Morpheo Inria Grenoble), Franck Multon
  • PhD in progress (beginning December 2020): Mohamed Younes, Learning and simulating strategies in sports for VR training, University Rennes 1, Franck Multon, Richard Kulpa, Ewa Kijak, Simon Malinowski
  • PhD defended (beginning September 2019, defended July 2022): Claire Livet, Contributions algorithmiques à l’analyse musculo-squelettique : modèles et méthodes, Ecole normale supérieure, Georges Dumont, Charles Pontonnier
  • PhD defended (beginning October 2019, defended November 2022): Louise Demestre, Analyse de critères mécaniques de la performance en plongeon sur tremplin à l’aide d’un modèle d’interaction plongeur plongeoir, Ecole normale supérieure, Georges Dumont, Charles Pontonnier, Nicolas Bideau, Guillaume Nicolas
  • PhD defended (beginning October 2018, defended May 2022): Benjamin Niay, A framework for synthesizing personalised human motions from motion capture data and perceptual information, Ludovic Hoyet, Anne-Hélène Olivier, Julien Pettré (Virtus team).
  • PhD in progress (beginning September 2019): Alberto Jovane, Modélisation de mouvements réactifs et comportements non verbaux pour la création d’acteurs digitaux pour la réalité virtuelle, Marc Christie, Ludovic Hoyet, Claudio Pacchierotti (Rainbow team), Julien Pettré (Virtus team).
  • PhD defended (beginning November 2019, defended November 2022): Adèle Colas, Modélisation de comportements collectifs réactifs et expressifs pour la réalité virtuelle, Ludovic Hoyet, Anne-Hélène Olivier, Claudio Pacchierotti (Rainbow team), Julien Pettré (Virtus team).
  • PhD in progress (beginning September 2018): Carole Puil, Impact d’une stimulation plantaire orthétique sur la posture d’individus sains et posturalement déficients au cours de la station debout, et lors de la marche, Armel Crétual, Anne-Hélène Olivier
  • PhD in progress (beginning September 2019): Annabelle Limballe, Anticipation dans les sports de combat : la réalité virtuelle comme solution innovante d’entraînement, Richard Kulpa & Simon Bennett & Benoit Bideau
  • PhD in progress (beginning September 2019): Alexandre Vu, Evaluation de l’influence des feedbacks sur la capacité d’apprentissage dans le cadre d’interactions complexes entre joueurs et influence de ces feedbacks en fonction de l’activité sportive, Richard Kulpa & Benoit Bideau & Anthony Sorel
  • PhD in progress (beginning September 2019): William Mocaer, Réseaux de Neurones à Convolution Spatio-Temporelle pour l'analyse et la reconnaissance précoce d’actions et de gestes, Eric Anquetil & Richard Kulpa
  • PhD in progress (beginning September 2020): Pauline Morin, Adaptation des méthodes de prédiction des efforts d’interaction pour l’analyse biomécanique du mouvement en milieu écologique, Ecole normale supérieure de Rennes, Georges Dumont, Charles Pontonnier
  • PhD in progress (beginning June 2020): Lucas Mourot, Learning-Based Human Character Animation Synthesis for Content Production, Pierre Hellier (InterDigital), Ludovic Hoyet, François Le Clerc (InterDigital).
  • PhD in progress (beginning September 2020): Agathe Bilhaut, Stratégies perceptivo-motrices durant la locomotion des patients douloureux chroniques : nouvelles méthodes d’analyse et de suivi, Armel Crétual, Anne-Hélène Olivier, Mathieu Ménard (Institut Ostéopathie Rennes, M2S)
  • PhD in progress (beginning October 2020): Emilie Leblong, Prise en compte des interactions sociales dans un simulateur de conduite de fauteuil roulant électrique en réalité virtuelle : favoriser l’apprentissage pour une mobilité inclusive, Anne-Hélène Olivier, Marie Babel (Rainbow team)
  • PhD in progress (beginning November 2020): Thomas Chatagnon, Micro-to-macro energy-based interaction models for dense crowds behavioral simulations, Ecole normale supérieure de Rennes, Ludovic Hoyet, Anne-Hélène Olivier, Charles Pontonnier, Julien Pettré (Virtus team).
  • PhD in progress (beginning November 2020): Vicenzo Abichequer-Sangalli, Humains virtuels expressifs et réactifs pour la réalité virtuelle, Marc Christie, Ludovic Hoyet, Carol O'Sullivan (TCD, Ireland), Julien Pettré (Virtus team).
  • PhD in progress (beginning November 2020): Tairan Yin, Création de scènes peuplées dynamiques pour la réalité virtuelle, Marc Christie, Marie-Paule Cani (LIX), Ludovic Hoyet, Julien Pettré (Virtus team).
  • PhD in progress (beginning October 2020): Qian Li, Neural novel view synthesis of dynamic people from monocular videos, Adnane Boukhayma, Franck Multon.
  • PhD in progress (beginning June 2022): Shubhendu Jena, Combining implicit and explicit representations for modeling 3D Shape and appearance, Adnane Boukhayma, Franck Multon.
  • PhD in progress (beginning November 2022): Sony Saint-Auret, Virtual Collaborative « Jeu de Paume », Ronan Gaugne, Valérie Gouranton, Franck Multon, Richard Kulpa.
  • PhD in progress (beginning October 2021): Maé Mavromatis, Towards “Avatar-Friendly” Characterization of Virtual Reality Interaction Methods, Ferran Argelaguet (Hybrid team), Ludovic Hoyet, Anatole Lécuyer (Hybrid team).
  • PhD in progress (beginning October 2021): Rebecca Crolan, Prediction of low back load during gymnastics landings for the prevention and follow-up of athlete injuries, Charles Pontonnier, Diane Haering, Matthieu Ménard (M2S Lab).
  • PhD in progress (beginning November 2021): Rim Rekik, Learning and evaluating 3D human motion synthesis, Anne-Hélène Olivier, Stefanie Wuhrer (Morpheo team).
  • PhD in progress (beginning Jan. 2022): Yulia Patotskaya, Appealing Character Design for Embodied Virtual Reality, Ludovic Hoyet, Julien Pettré (Virtus team), Katja Zibrek.
  • PhD in progress (beginning Jan. 2022): Hasnaa Ouadoudi Belabzioui, Force-based criterion for in-situ analysis of physical activity at work: application to load carrying, Charles Pontonnier, Franck Multon, Georges Dumont.
  • PhD in progress (beginning Oct. 2022): Alexis Jensen, Dense Crowd Simulation by physics modeling, Julien Pettré (Virtus Team), Charles Pontonnier.
  • PhD in progress (beginning Oct. 2022): Etienne Ricard, Musculoskeletal modeling of the human-exoskeleton system: evaluation and monitoring of exoskeleton familiarization indicators, Charles Pontonnier, Chris Hayot (INRS), Kevin Desbrosses (INRS).

10.2.2 Juries

  • PhD defense: Univ. Rennes 1, Benjamin Niay, "Evaluation and use of biomechanical parameters for virtual humans walking animations", May 2022, Franck Multon, President
  • HDR defense: INSA Rennes, Valérie Gouranton, "Modèles et Outils pour la Production d’Applications de RX, Focus sur le Patrimoine Culturel", March 2022, Franck Multon, President
  • PhD defense: Université Grenoble Alpes, Victor Romero, "Un environnement virtuel immersif, interactif et collaboratif pour les revues de conception basées sur les modèles", 13 December 2022, Georges Dumont, Reviewer
  • PhD defense: Université Lyon 1 Claude Bernard, Thomas Bonis, "Simulation directe prédictive de marches pathologiques", 13 October 2022, Georges Dumont, Reviewer
  • PhD defense: Aix-Marseille Université, Mathieu Caumes, "Rôle de la relation de force longueur sur la coordination musculaire et la production de force avec la main : Approche par modélisation biomécanique", 28 April 2022, Georges Dumont, Reviewer
  • PhD defense: Université Lyon 1 Claude Bernard, Vincent Gibeaux, "Motion simulator for accessibility", December 2022, Charles Pontonnier, Reviewer
  • PhD defense: Ecole Centrale de Nantes, Vamsi Krishna Gruda, "Contributions to utilize a Cobot as intermittent contact haptic interfaces in virtual reality", August 2022, Charles Pontonnier, Reviewer

11 Scientific production

11.1 Major publications

11.2 Publications of the year

International journals

International peer-reviewed conferences

  • 36 A. Aristidou, A. Chalmers, Y. Chrysanthou, C. Loscos, F. Multon, J. Parkins, B. Sarupuri and E. Stavrakis. "Safeguarding our Dance Cultural Heritage". Eurographics 2022 - 43rd Annual Conference of the European Association for Computer Graphics, Tutorials, Reims, France, April 2022, pp. 1-6.
  • 37 J. Basset, B. Ouannas, L. Hoyet, F. Multon and S. Wuhrer. "Impact of Self-Contacts on Perceived Pose Equivalences". MIG 2022 - ACM SIGGRAPH Conference on Motion, Interaction and Games, Guanajuato, Mexico, ACM, November 2022, pp. 1-10.
  • 38 P. Raimbaud, A. Jovane, K. Zibrek, C. Pacchierotti, M. Christie, L. Hoyet, J. Pettré and A.-H. Olivier. "The Stare-in-the-Crowd Effect in Virtual Reality". VR 2022 - IEEE Conference on Virtual Reality and 3D User Interfaces, Christchurch, New Zealand, IEEE, March 2022, pp. 281-290.
  • 39 J. Regateiro and E. Boyer. "Temporal Shape Transfer Network for 3D Human Motion". 3DV 2022 - International Conference on 3D Vision, Prague / Hybrid, Czech Republic, September 2022, pp. 1-9.
  • 40 X. Wang, A. Boukhayma, S. Prévost, E. Desjardin, C. Loscos and F. Multon. "Coupling dense point cloud correspondence and template model fitting for 3D human pose and shape reconstruction from a single depth image". International Conference on Interactive Media, Smart Systems and Emerging Technologies (IMET), Limassol, Cyprus, 2022, pp. 1-8.

Conferences without proceedings

  • 41 W. Mocaër, E. Anquetil and R. Kulpa. "Early Recognition of Untrimmed Handwritten Gestures with Spatio-Temporal 3D CNN". ICPR 2022 - International Conference on Pattern Recognition, Montréal, Canada, August 2022, pp. 1636-1642.
  • 42 W. Mocaër, E. Anquetil and R. Kulpa. "Réseau Convolutif Spatio-Temporel 3D pour la Reconnaissance Précoce de Gestes Manuscrits Non-Segmentés" [3D Spatio-Temporal Convolutional Network for Early Recognition of Unsegmented Handwritten Gestures]. RFIAP 2022 - Congrès Reconnaissance des Formes, Image, Apprentissage et Perception, Vannes, France, July 2022, pp. 1-9.
  • 43 A. Ouasfi and A. Boukhayma. "Few 'Zero Level Set'-Shot Learning of Shape Signed Distance Functions in Feature Space". ECCV 2022 - European Conference on Computer Vision, LNCS 13692, Tel Aviv, Israel, Springer Nature Switzerland, July 2022, pp. 561-578.

Scientific book chapters

  • 44 S. Salvan, F. Maqueda and C. Pontonnier. "Un exemple d'archéologie du geste martial : les guerriers de Paestum" [An example of the archaeology of martial gesture: the warriors of Paestum]. In: Armes et Guerriers, BAR International Series S3078, BAR Publishing, April 2022, pp. 1-12.

Doctoral dissertations and habilitation theses

  • 45 J. Basset. "Morphologically Plausible Deformation Transfer". PhD thesis, Université Grenoble Alpes, June 2022.
  • 46 L. Demestre. "Analysis of mechanical criteria of springboard diving performance using a diver-diving board interaction model". PhD thesis, École normale supérieure de Rennes, November 2022.
  • 47 L. Hoyet. "Towards Perception-based Character Animation". Habilitation thesis (HDR), Université Rennes 1, May 2022.
  • 48 C. Livet. "Algorithmic contributions to musculoskeletal analysis: models and methods". PhD thesis, École normale supérieure de Rennes, July 2022.
  • 49 N. Olivier. "Adaptive avatar customization for immersive experiences". PhD thesis, Université Rennes 1, March 2022.

Reports & preprints

Other scientific publications