MimeTIC is a multidisciplinary team whose aim is to better understand and model human activity in order to simulate realistic autonomous virtual humans: realistic behaviors, realistic motions and realistic interactions with other characters and users. This requires modeling the complexity of the human body, as well as of the environment in which it picks up information and on which it acts. A specific focus is dedicated to human physical activity and sports, as these raise the strongest constraints and the greatest complexity when addressing such problems. Thus, MimeTIC is composed of experts in computer science whose research interests are computer animation, behavioral simulation, motion simulation, crowds and interaction between real and virtual humans. MimeTIC is also composed of experts in sports science, motion analysis, motion sensing, biomechanics and motion control. Hence, the scientific foundations of MimeTIC are motion sciences (biomechanics, motion control, perception-action coupling, motion analysis), computational geometry (modeling of the 3D environment, motion planning, path planning) and the design of protocols in immersive environments (use of virtual reality facilities to analyze human activity).
Thanks to these skills, we wish to reach the following objectives: to make virtual humans behave, move and interact in a natural manner, in order to increase immersion and to improve knowledge of human motion control. In real situations (see Figure ), people have to deal with their physiological, biomechanical and neurophysiological capabilities in order to reach a complex goal. Hence MimeTIC addresses the problem of modeling the anatomical, biomechanical and physiological properties of human beings. Moreover, these characters have to deal with their environment. First, they must perceive this environment and pick up relevant information. MimeTIC thus addresses the problem of modeling the environment, including its geometry and associated semantic information. Second, they must act on this environment to reach their goals. This involves cognitive processes, motion planning, joint coordination and force production.
In order to reach the above objectives, MimeTIC has to address three main challenges:
dealing with the intrinsic complexity of human beings, especially when addressing the problem of interactions between people for which it is impossible to predict and model all the possible states of the system,
making the different components of human activity control (such as the biomechanical and physical, the reactive, cognitive, rational and social layers) interact while each of them is modeled with completely different states and time sampling,
and being able to measure human activity while balancing between ecological and controllable protocols, and to extract relevant information from large databases.
Contrary to many classical approaches in computer simulation, which mostly propose simulation without trying to understand how real people behave, the team promotes a coupling between human activity analysis and synthesis, as shown in Figure .
In this research path, improving knowledge on human activity enables us to highlight fundamental assumptions about the natural control of human activities. These contributions can be promoted in, e.g., biomechanics, motion sciences and neurosciences. According to these assumptions, we propose new algorithms for controlling autonomous virtual humans. The virtual humans can perceive their environment and decide on the most natural action to reach a given goal. This work is promoted in computer animation and virtual reality, and has some applications in robotics through collaborations. Once autonomous virtual humans have the ability to act as real humans would in the same situation, it is possible to make them interact with others, i.e., autonomous characters (for crowd or group simulations) as well as real users. The key idea here is to analyze to what extent the assumptions proposed at the first stage lead to natural interactions with real users. This process enables the validation of both our assumptions and our models.
Among all the problems and challenges described above, MimeTIC focuses on the following domains of research:
motion sensing, which is a key issue for extracting information from raw motion capture data and thus proposing assumptions on how people control their activity,
human activity & virtual reality, which is explored through sports applications in MimeTIC. This domain enables the design of new methods for analyzing the perception-action coupling in human activity, and allows validating whether autonomous characters lead to natural interactions with users,
interactions in small and large groups of individuals, to understand and model interactions with high individual variability, such as in crowds,
virtual storytelling which enables us to design and simulate complex scenarios involving several humans who have to satisfy numerous complex constraints (such as adapting to the real-time environment in order to play an imposed scenario), and to design the coupling with the camera scenario to provide the user with a real cinematographic experience,
biomechanics which is essential to offer autonomous virtual humans who can react to physical constraints in order to reach high-level goals, such as maintaining balance in dynamic situations or selecting a natural motor behavior among the whole theoretical solution space for a given task,
and autonomous characters, a transversal domain that can reuse the results of all the other domains so that these heterogeneous assumptions and models provide the character with natural behaviors and autonomy.
Human motion control is a highly complex phenomenon that involves several layered systems, as shown in Figure . Each layer of this controller is responsible for dealing with perceptual stimuli in order to decide the actions that should be applied to the human body and its environment. Due to the intrinsic complexity of the information (internal representation of the body and mental state, external representation of the environment) used to perform this task, it is almost impossible to model all the possible states of the system. Even for simple problems, there generally exists an infinity of solutions. For example, from the biomechanical point of view, there are many more actuators (i.e., muscles) than degrees of freedom, leading to an infinity of muscle activation patterns for a unique joint rotation. From the reactive point of view, there exists an infinity of paths to avoid a given obstacle in navigation tasks. At each layer, the key problem is to understand how people select one solution among these infinite state spaces. Several scientific domains have addressed this problem from specific points of view, such as physiology, biomechanics, neurosciences and psychology.
In biomechanics and physiology, researchers have proposed hypotheses based on accurate joint modeling (to identify the real anatomical rotational axes), energy minimization, force and torque minimization, comfort maximization (i.e., avoiding joint limits), and physiological limitations in muscle force production. All these constraints have been used in optimal controllers to simulate natural motions. The main problem is thus to define how these constraints are composed, for example by searching for the weights used to linearly combine these criteria in order to generate a natural motion. Musculoskeletal models are stereotypical examples for which there exists an infinity of muscle activation patterns, especially when dealing with antagonist muscles. An unresolved problem is to define how to use the above criteria to retrieve the actual activation patterns, as optimization approaches still lead to unrealistic ones. This open problem will require multidisciplinary skills including computer simulation, constraint solving, biomechanics, optimal control, physiology and neuroscience.
In neuroscience, researchers have proposed other theories, such as coordination patterns between joints driven by simplifications of the variables used to control the motion. The key idea is to assume that, instead of controlling all the degrees of freedom, people control higher-level variables that correspond to combinations of joint angles. In walking, data reduction techniques such as Principal Component Analysis have shown that lower-limb joint angles generally project onto a unique plane whose angle in the state space is associated with energy expenditure. Although knowledge exists for specific motions, such as locomotion or grasping, this type of approach is still difficult to generalize. The key problem is that many variables are coupled, and it is very difficult to objectively study the behavior of a single variable across various motor tasks. Computer simulation is a promising method to evaluate such assumptions, as it enables us to accurately control all the variables and to check whether they lead to natural movements.
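The planar covariation idea mentioned above can be illustrated with a minimal sketch. The data below are synthetic (hip/knee/ankle elevation angles generated from two underlying gait variables, not real recordings); a PCA on their covariance matrix then shows that two components capture almost all the variance, i.e., the trajectories lie near a plane in angle space.

```python
import numpy as np

# Minimal sketch (synthetic data, not team results): planar covariation
# of lower-limb elevation angles. Three joint angles are generated as
# linear combinations of two underlying control variables, plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 200)           # one normalized gait cycle
u1, u2 = np.sin(t), np.cos(t)                    # two control variables
angles = np.column_stack([
    30.0 * u1 + 5.0 * u2,                        # "hip"
    60.0 * u1 - 10.0 * u2,                       # "knee"
    15.0 * u1 + 20.0 * u2,                       # "ankle"
]) + rng.normal(scale=0.5, size=(200, 3))        # measurement noise

centered = angles - angles.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
planarity = eigvals[:2].sum() / eigvals.sum()    # variance in the best plane
print(f"variance explained by 2 components: {planarity:.3f}")
```

With noiseless data the planarity index would be exactly 1; noise only slightly degrades it, which is what makes the planar law detectable in real gait data.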
Neuroscience also addresses the problem of coupling perception and action by providing control laws based on visual cues (or any other senses), such as determining how the optical flow is used to control direction in navigation tasks, while dealing with collision avoidance or interception. Coupling of the control variables is enhanced in this case, as the state of the body is enriched by the large amount of external information that the subject can use. Virtual environments inhabited by autonomous characters whose behavior is driven by motion control assumptions are a promising approach to solve this problem. For example, an interesting problem in this field is navigation in an environment inhabited by other people. Typically, avoiding static obstacles together with other people moving through the environment is a combinatorial problem that strongly relies on the coupling between perception and action.
One of the main objectives of MimeTIC is to enhance knowledge on human motion control by developing innovative experiments based on computer simulation and immersive environments. To this end, designing experimental protocols is a key point, and some of the researchers in MimeTIC have developed this skill in biomechanics and perception-action coupling. Associating these researchers with experts in virtual human simulation, computational geometry and constraint solving enables us to contribute to fundamental knowledge in human motion control.
Understanding interactions between humans is challenging because it involves many complex phenomena, including perception, decision-making, cognition and social behaviors. Moreover, all these phenomena are difficult to isolate in real situations, and it is therefore highly complex to understand their individual influence on human interactions. It is then necessary to find an alternative solution that can standardize the experiments and that allows the modification of only one parameter at a time. Video was first used, since a displayed experiment is perfectly repeatable and cut-offs (stopping the video at a specific time before its end) provide temporal information. Nevertheless, the absence of an adapted viewpoint and of stereoscopic vision deprives the observer of very meaningful depth information. Moreover, during a video recording session, the real human is acting in front of a camera, not an opponent. The interaction is then not a real interaction between humans.
Virtual Reality (VR) systems allow full standardization of the experimental situations and complete control of the virtual environment. It is then possible to modify only one parameter at a time and to observe its influence on the perception of the immersed subject. VR can thus be used to understand what information is picked up to make a decision. Moreover, cut-offs can also be used to obtain temporal information about when information is picked up. Furthermore, when the subject can react as in a real situation, his movement (captured in real time) provides information about his reactions to the modified parameter. Not only is perception studied, but the complete perception-action loop. Perception and action are indeed coupled and influence each other, as suggested by Gibson in 1979.
Finally, VR allows the validation of virtual human models. Some models are indeed based on the interaction between the virtual character and other humans, such as a walking model. In that case, there are two ways to validate them. First, they can be compared to real data (e.g., real trajectories of pedestrians), but such data are not always available and are difficult to get. The alternative solution is then to use VR. The realism of the model is validated by immersing a real subject in a virtual environment in which a virtual character is controlled by the model. The evaluation is then deduced from how the immersed subject reacts when interacting with the model, and how realistic he finds the virtual character.
Computational geometry is a branch of computer science devoted to the study of algorithms that can be stated in terms of geometry. It studies algorithms for combinatorial, topological and metric problems concerning sets of points in Euclidean spaces. Combinatorial computational geometry focuses on three main problem classes: static problems, geometric query problems and dynamic problems.
In static problems, some inputs are given and the corresponding outputs need to be constructed or found. Such problems include, for instance, linear programming, Delaunay triangulations, and Euclidean shortest paths. In geometric query problems, commonly known as geometric search problems, the input consists of two parts: the search space part and the query part, which varies over the problem instances. The search space typically needs to be preprocessed so that multiple queries can be answered efficiently. Some typical problems are range searching, point location in a partitioned space, or nearest neighbor queries. In dynamic problems, the goal is to find an efficient algorithm for finding a solution repeatedly after each incremental modification of the input data (addition, deletion or motion of input geometric elements). Algorithms for problems of this type typically involve dynamic data structures. Both of the previous problem types can be converted into a dynamic problem, for instance, maintaining a Delaunay triangulation between moving points.
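The preprocess-then-query structure of geometric search problems can be sketched in its simplest form, 1D range searching (an illustrative example, not MimeTIC code): the search space is sorted once, after which each range query runs in O(log n + k) instead of scanning all points.

```python
from bisect import bisect_left, bisect_right

# Minimal sketch of a geometric query problem (1D range searching):
# preprocess the search space once, then answer many queries cheaply.
def preprocess(points):
    """One-time preprocessing of the search space: sort the points."""
    return sorted(points)

def range_query(sorted_points, lo, hi):
    """Return all points p with lo <= p <= hi, via two binary searches."""
    i = bisect_left(sorted_points, lo)
    j = bisect_right(sorted_points, hi)
    return sorted_points[i:j]

pts = preprocess([7.5, 2.0, 9.1, 4.4, 0.3, 5.8])
print(range_query(pts, 2.0, 6.0))   # -> [2.0, 4.4, 5.8]
```

Higher-dimensional analogues (k-d trees, range trees) follow the same pattern: pay a preprocessing cost once so that the per-query cost stays compatible with interactive applications.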
In this context, distance geometry relies solely on distances, instead of points and lines as in classical geometry. Various applications lead to problems that can be formulated as distance geometry problems, including sensor network localization, robot coordination, the identification of molecular conformations, or, in the context of MimeTIC, relations between objects in virtual scenes (e.g., distances between body segments, agents, or cameras). In recent years, scientific research has focused on the assumptions that allow discretizing the search space of a given distance geometry problem. The discretization (which is exact in some situations) makes it possible to conceive efficient ad-hoc algorithms and to enumerate the entire solution set of a given instance.
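The simplest distance geometry instance, recovering a 2D position from distances to three known anchors, can be written in a few lines. This is a generic textbook sketch (the anchors and target below are made up): subtracting pairs of circle equations cancels the quadratic terms and leaves a 2x2 linear system.

```python
import math

# Minimal sketch (hypothetical data): 2D trilateration, the simplest
# distance geometry problem. Subtracting pairs of circle equations
# (x - xi)^2 + (y - yi)^2 = di^2 yields a linear system in (x, y).
def trilaterate(anchors, dists):
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21       # non-zero iff anchors are not collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
target = (1.0, 2.0)
dists = [math.dist(target, a) for a in anchors]
x, y = trilaterate(anchors, dists)
print(round(x, 6), round(y, 6))   # -> 1.0 2.0
```

With only two anchors the solution set is discrete (two mirror-image points), which is exactly the kind of discretization of the search space that the ad-hoc enumeration algorithms mentioned above exploit.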
The MimeTIC team works on problems such as crowd simulation, spatial analysis, path and motion planning in static and dynamic environments, and camera planning with visibility constraints. The core of these problems, by nature, relies on problems and techniques belonging to computational geometry. The proposed models pay attention to algorithmic complexity, in order to be compatible with the performance constraints imposed by interactive applications.
Autonomous characters are becoming more and more popular, as they are used in an increasing number of application domains. In the field of special effects, virtual characters are used to replace secondary actors and generate highly populated scenes that would be hard and costly to produce with real actors. In video games and virtual storytelling, autonomous characters play the role of actors that are driven by a scenario. Their autonomy allows them to react to unpredictable user interactions and adapt their behavior accordingly. In the field of simulation, autonomous characters are used to simulate the behavior of humans in different kinds of situations. They make it possible to study new situations and their possible outcomes.
One of the main challenges in the field of autonomous characters is to provide a unified architecture for the modeling of their behavior. This architecture includes perception, action and decisional parts. The decisional part needs to mix different kinds of models, acting at different time scales and working with data of different natures, ranging from numerical (motion control, reactive behaviors) to symbolic (goal-oriented behaviors, reasoning about actions and changes).
In the MimeTIC team, we focus on autonomous virtual humans. Our goal is not to reproduce human intelligence but to propose an architecture that makes it possible to model credible behaviors of anthropomorphic virtual actors evolving and moving in real time in virtual worlds. These worlds may represent particular situations studied by behavioral psychologists, or correspond to an imaginary universe described by a scenario writer. The proposed architecture should mimic all the human intellectual and physical functions.
Biomechanics is obviously a very large domain. It can be divided according to the scale at which the analysis is performed, going from microscopic evaluation of the mechanical properties of biological tissues to macroscopic analysis and modeling of whole-body motion. Our topics in the domain of biomechanics mainly lie within this last scope. In order to obtain a better understanding of human motion, MimeTIC addresses three main situations: everyday motions of an average subject, locomotion of pathological subjects, and sports gestures.
In the first situation, MimeTIC is interested in studying how subjects maintain their balance in highly dynamic conditions. Until now, balance has nearly always been considered in static or quasi-static conditions, and knowledge of much more dynamic cases still has to be improved. Our work has shown, first of all, that the question of which parameter best characterizes balance in such conditions is still open. We have also largely contributed to gaining a better understanding of collision avoidance between pedestrians. This topic includes the search for the parameters that are interactively controlled, and the study of each one's role within this interaction.
The second situation focuses on the locomotion of pathological subjects. When patients cannot walk efficiently, in particular those suffering from central nervous system disorders, it becomes very useful for practitioners to benefit from an objective evaluation of their capacities. To facilitate such evaluations, we have developed two complementary indices, one based on kinematics and the other on muscle activations. One major point of our research is that such indices are usually only developed for children, whereas adults with these disorders are much more numerous. Finally, in sports, where the gesture can in some way be considered abnormal, the goal is more precisely to understand the determinants of performance. This could then be used to improve training programs or devices. Two different sports have been studied: a) the tennis serve, where the goal was to understand the contribution of each body segment to the speed of the ball, and b) fin swimming, where we studied the influence of the mechanical characteristics of the fin.
After having improved the knowledge of these different gestures a second goal is then to propose modeling solutions that can be used in VR environments for other research topics within MimeTIC. This has been the case, for example, for collision avoidance.
Modeling and simulating the interactions between walkers is a very active, complex and competitive domain that interests various disciplines such as mathematics, cognitive sciences, physics and computer graphics. Interactions between walkers are by definition at the very core of our society, since they represent the basic synergies of our daily life. When walking in the street, we take in information about our surrounding environment in order to interact with people, move without collision, alone or in a group, and intercept, meet or escape from somebody. Large groups of walkers can first be seen as a complex system: numerous local interactions occur between its elements and result in macroscopic emergent phenomena. Interactions are of various natures (e.g., collision avoidance, following) and are subject to various factors as well. Physical factors are crucial, as a group by definition gathers numerous moving people at a certain level of density. But sociological, cultural and psychological factors are important as well, since people's behavior changes deeply from country to country, or depending on the considered situations. From the computational point of view, simulating the movements of large groups of walkers (i.e., crowds) pushes traditional simulation algorithms to their limit. As an element of a crowd may interact with any other element belonging to the same crowd, a naive simulation algorithm has quadratic complexity. Specific strategies are devised to face this difficulty: level-of-detail techniques enable scaling up crowd simulation and reaching real-time solutions.
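One standard way to avoid the quadratic all-pairs check is spatial hashing, sketched below (an illustrative technique commonly used in crowd simulators, not a description of MimeTIC's own code): walkers are binned into a uniform grid, so each interaction query only visits the nine surrounding cells.

```python
from collections import defaultdict

# Minimal sketch: uniform-grid spatial hashing for crowd neighbor
# queries, replacing the O(n^2) all-pairs test of a naive simulator.
def build_grid(positions, cell):
    """Bin walker indices into grid cells of size `cell`."""
    grid = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        grid[(int(x // cell), int(y // cell))].append(i)
    return grid

def neighbors(grid, positions, i, cell, radius):
    """Indices of walkers within `radius` of walker i (radius <= cell)."""
    x, y = positions[i]
    cx, cy = int(x // cell), int(y // cell)
    found = []
    for dx in (-1, 0, 1):                 # only the 9 surrounding cells
        for dy in (-1, 0, 1):
            for j in grid.get((cx + dx, cy + dy), []):
                if j != i:
                    px, py = positions[j]
                    if (px - x) ** 2 + (py - y) ** 2 <= radius ** 2:
                        found.append(j)
    return found

walkers = [(0.2, 0.3), (0.8, 0.4), (5.0, 5.0), (0.5, 0.9)]
grid = build_grid(walkers, cell=1.0)
print(sorted(neighbors(grid, walkers, 0, cell=1.0, radius=1.0)))  # -> [1, 3]
```

At roughly uniform density, each query touches a bounded number of walkers, so one simulation step becomes close to linear in the crowd size.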
MimeTIC is an international key contributor in the domain of understanding and simulating interactions between walkers, in particular for virtual crowds. Our approach is specific and based on three axes. First, our modeling approach is based on human movement science: we conduct challenging experiments focusing on the perception as well as on the motion involved in local interactions between walkers, using both real and virtual set-ups. Second, we develop high-performance solutions for crowd simulation. Third, we develop solutions for realistic navigation in virtual worlds, to enable interaction with crowds in Virtual Reality.
Recording human activity is a key point of many applications and fundamental works. Numerous sensors and systems have been proposed to measure positions, angles or accelerations of the user's body parts. Whatever the system, one of the main problems is to be able to automatically recognize and analyze the user's performance from poor and noisy signals. Human activity and motion are subject to variability: intra-variability due to space and time variations of a given motion, but also inter-variability due to different styles and anthropometric dimensions. MimeTIC has addressed the above problems in two main directions.
Firstly, we have studied how to recognize and quantify motions performed by a user when using accurate systems such as Vicon (product of Oxford Metrics) or OptiTrack (product of NaturalPoint) motion capture systems. These systems provide large vectors of accurate information. Due to the size of the state vector (all the degrees of freedom), the challenge is to find the compact information (named features) that enables the automatic system to recognize the performance of the user. Whatever the method used, finding relevant features that are not sensitive to intra-individual and inter-individual variability is a challenge. Some researchers have proposed to manually edit these features (such as a Boolean value stating whether the arm is moving forward or backward), so that the expertise of the designer is directly linked to the success ratio. Many generic features have also been proposed, such as the Laban notation, which was introduced to encode dancing motions. Other approaches use machine learning to automatically extract these features. However, most of the proposed approaches search a database for motions whose properties correspond to the features of the user's performance (named motion retrieval approaches). This does not ensure the retrieval of the exact performance of the user, but of a set of motions with similar properties.
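A hand-edited feature of the kind mentioned above can be sketched as follows (a hypothetical illustration with made-up numbers, not a MimeTIC feature set): a Boolean value stating whether the arm swings forward, computed from successive wrist positions along the walker's facing axis.

```python
# Minimal sketch (hypothetical feature): a hand-edited Boolean feature
# for motion recognition, reducing a per-frame trajectory to one bit.
def arm_moving_forward(wrist_x, eps=1e-3):
    """wrist_x: wrist coordinates along the facing axis, one per frame.

    Returns True when the mean frame-to-frame displacement is forward.
    """
    deltas = [b - a for a, b in zip(wrist_x, wrist_x[1:])]
    mean_delta = sum(deltas) / len(deltas)
    return mean_delta > eps          # True = forward, False = backward/static

forward_swing = [0.10, 0.14, 0.19, 0.25, 0.30]
backward_swing = list(reversed(forward_swing))
print(arm_moving_forward(forward_swing), arm_moving_forward(backward_swing))
# -> True False
```

The designer's expertise is encoded in choices like the axis, the threshold `eps` and the averaging window, which is exactly why such hand-crafted features tie recognition quality to the designer's skill.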
Secondly, we wish to find alternatives to the above approach, which is based on analyzing accurate and complete knowledge of joint angles and positions. New sensors, such as depth cameras (Kinect, product of Microsoft), provide us with very noisy joint information but also with the surface of the user. Classical approaches would try to fit a skeleton to the surface in order to compute joint angles, which, again, leads to large state vectors. An alternative is to extract relevant information directly from the raw data, such as the surface provided by depth cameras. The key problem is that the nature of these data may be very different from classical representations of human performance. In MimeTIC, we try to address this problem in specific application domains that require picking specific information, such as gait asymmetry or regularity for the clinical analysis of human walking.
Sport is characterized by complex displacements and motions. These motions depend on the visual information that the athlete can pick up in his environment, including the opponent's actions. Perception is thus fundamental to performance. Indeed, a sporting action, being unique, complex and often limited in time, requires a selective gathering of information. Perception is often seen as a mere prerequisite for action, taking the role of a passive collector of information. However, as mentioned by Gibson in 1979, the perception-action relationship should not be considered sequential but rather as a coupling: we perceive to act, but we must act to perceive. There would thus be laws of coupling between the informational variables available in the environment and the motor responses of a subject. In other words, athletes have the ability to perceive the opportunities of action directly from the environment. Whichever school of thought is considered, VR offers new perspectives to address these concepts by complementing them with real-time motion capture of the immersed athlete.
In addition to better understanding sports and interactions between athletes, VR can also be used as a training environment, as it can provide complementary tools to coaches. It is indeed possible to add visual or auditory information to better train an athlete. The knowledge gained from perceptual experiments can, for example, be used to highlight the body parts that are important to look at to correctly anticipate the opponent's action.
Interactive digital storytelling, including novel forms of edutainment and serious games, provides access to social and human themes through stories that can take various forms, and offers opportunities for massively enhancing the possibilities of interactive entertainment, computer games and digital applications. It provides chances for redefining the experience of narrative through interactive simulations of computer-generated story worlds, and opens many challenging questions at the overlap between computational narratives, autonomous behaviours, interactive control, content generation and authoring tools.
Of particular interest for the MimeTIC research team, virtual storytelling triggers challenging opportunities in providing effective models for enforcing autonomous behaviours for characters in complex 3D environments. Offering characters low-level capacities, such as perceiving the environment, interacting with it and reacting to changes in its topology, on which to build higher levels, such as abstract representations for efficient reasoning, path and activity planning, and models of cognitive states and behaviours, requires expressive, multi-level and efficient computational models. Furthermore, virtual storytelling requires seamless control of the balance between the autonomy of characters and the unfolding of the story through the narrative discourse. Virtual storytelling also raises challenging questions on the conveyance of a narrative through interactive or automated control of the cinematography (how to stage the characters, the lights and the cameras). For example, estimating the visibility of key subjects, or performing motion planning for cameras and lights, are central issues which have not received satisfactory answers in the literature.
The design of workstations nowadays tends to include assessment steps in a Virtual Environment (VE) to evaluate ergonomic features. This approach is more cost-effective and convenient since working directly on the Digital Mock-Up (DMU) in a VE is preferable to constructing a real physical mock-up in a Real Environment (RE). This is substantiated by the fact that a Virtual Reality (VR) set-up can be easily modified, enabling quick adjustments of the workstation design. Indeed, the aim of integrating ergonomics evaluation tools in VEs is to facilitate the design process, enhance the design efficiency, and reduce the costs.
The development of such platforms calls for several improvements in the fields of motion analysis and VR. First, interactions have to be as natural as possible to properly mimic the motions performed in real environments. Second, the fidelity of the simulator also needs to be correctly evaluated. Finally, motion analysis tools have to be able to provide, in real time, biomechanical quantities usable by ergonomists to analyse and improve working conditions.
The results of the PhD thesis of Pierre Plantard led to the software "Kimea", with a national APP deposit. The Faurecia company encouraged us to create a start-up company based on these results. Hence, we obtained two grants from the SATT "Ouest Valorisation" (total 300K) to transform the thesis prototype into a professional solution. The Kimea project has been granted several industrial and innovation prizes (see below). A software engineer and an ergonomist have been recruited to create the original team of the future start-up (creation planned for the beginning of 2018). The software is based on several previously published works and validated in an actual industrial context. Franck Multon will be a scientific expert in the future start-up, as a co-founder of the company.
Kimea project has been granted by regional and national innovation committees:
regional Pepite Tremplin competition by the Universities Bretagne Loire, October 2017,
national Pepite Tremplin competition (53 projects granted among 700), November 2017,
granted "Projet du futur" of the BPO foundation (10k), 18/10/2017,
Top 500 national startup, "Hello Tomorrow" challenge, 25-27 October 2017.
Asymmetry index for clinical gait analysis based on depth images
Keywords: Motion analysis - Kinect - Clinical analysis
Scientific Description: The system uses depth images delivered by the Microsoft Kinect to first retrieve the gait cycles. To this end, it analyzes the knee trajectories instead of the feet, to obtain more robust gait event detection. Based on these cycles, the system computes a mean gait cycle model to decrease the effect of sensor noise. Asymmetry is then computed at each frame of the gait cycle as the spatial difference between the left and right parts of the body.
Functional Description: AsymGait is a software package that works with Microsoft Kinect data, especially depth images, in order to carry out clinical gait analysis. First it identifies the main gait events using the depth information (footstrike, toe-off) to isolate gait cycles. Then it computes a continuous asymmetry index within the gait cycle. Asymmetry is viewed as a spatial difference between the two sides of the body.
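The per-frame asymmetry measure described above can be sketched as follows (an illustrative simplification with made-up depth samples, not AsymGait code): for each frame of a normalized gait cycle, corresponding left- and right-side samples are compared, and their mean spatial difference gives a continuous index over the cycle.

```python
# Minimal sketch (illustrative data, not AsymGait): a continuous
# asymmetry index computed frame by frame over a normalized gait cycle.
def asymmetry_index(left, right):
    """left, right: per-frame lists of spatial samples for each body side.

    Returns one asymmetry value per frame (0.0 = perfectly symmetric).
    """
    per_frame = []
    for l_frame, r_frame in zip(left, right):
        diff = sum(abs(l - r) for l, r in zip(l_frame, r_frame))
        per_frame.append(diff / len(l_frame))
    return per_frame

# Two frames, three depth samples per side (hypothetical values):
left_side = [[1.00, 1.10, 1.20], [1.05, 1.15, 1.25]]
right_side = [[1.00, 1.10, 1.20], [1.25, 1.35, 1.45]]
index = asymmetry_index(left_side, right_side)
print([round(v, 2) for v in index])   # -> [0.0, 0.2]
```

Averaging such an index over a mean gait cycle model, as the system does, smooths out sensor noise while keeping the within-cycle asymmetry profile.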
Participants: Edouard Auvinet and Franck Multon
Contact: Franck Multon
Keyword: 3D animation
Functional Description: The software, developed as an API, provides a means to automatically compute a collection of viewpoints over one or two specified geometric entities, in a given 3D scene, at a given time. These viewpoints satisfy classical cinematographic framing conventions and guidelines, including different shot scales (from extreme long shot to extreme close-up), different shot angles (internal, external, parallel, apex), and different screen compositions (thirds, fifths, symmetric or dissymmetric). The viewpoints cover the range of possible framings for the specified entities. The computation of such viewpoints relies on a database of framings that are dynamically adapted to the 3D scene by using a manifold parametric representation, and guarantees the visibility of the specified entities. The set of viewpoints is also automatically annotated with cinematographic tags such as shot scales, angles, compositions, relative placement of entities, and line of interest.
Participants: Christophe Lino, Emmanuel Badier and Marc Christie
Partners: Université d'Udine - Université de Nantes
Contact: Marc Christie
Keywords: Previsualization - Virtual camera - 3D animation
Functional Description: Directors Lens Motion Builder is a software plugin for Autodesk's MotionBuilder animation tool. This plugin features a novel workflow to rapidly prototype cinematographic sequences in a 3D scene, and is dedicated to the 3D animation and movie previsualization industries. The workflow integrates the automated computation of viewpoints (using the Cinematic Viewpoint Generator) to interactively explore different framings of the scene, provides means to interactively control framings in image space, and proposes a technique to automatically retarget a camera trajectory from one scene to another while enforcing visual properties. The tool also makes it possible to edit the cinematographic sequence and export the animation. The software can be linked to different virtual camera systems available on the market.
Participants: Christophe Lino, Emmanuel Badier and Marc Christie
Partner: Université de Rennes 1
Contact: Marc Christie
Kinect IMprovement for Ergonomics Assessment
Keywords: Biomechanics - Motion analysis - Kinect
Scientific Description: Kimea corrects the skeleton data delivered by a Microsoft Kinect for ergonomics purposes. Kimea is able to manage most of the occlusions that can occur in real working situations, on workstations. To this end, Kimea relies on a database of examples/poses organized as a graph, in order to replace unreliable body segment reconstructions with poses that have already been measured on real subjects. The potential pose candidates are used in an optimization framework.
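As an illustration of this example-based correction idea, the following minimal Python sketch replaces occluded joints by those of the nearest example pose. The flat example list, the reliability mask and all names are assumptions made for illustration; the actual Kimea software organises its examples as a filtered pose graph and selects candidates in an optimization framework.

```python
import numpy as np

def correct_pose(measured, reliability, examples):
    """Replace unreliably measured joints by those of the closest example pose.

    measured:    (J, 3) array of Kinect joint positions
    reliability: (J,) boolean mask, True where the joint is trusted
    examples:    (N, J, 3) array of previously captured poses
    (Hypothetical interface: the real system searches a filtered pose
    graph rather than a flat example list.)
    """
    # Distance to each example, computed on reliable joints only.
    diffs = examples[:, reliability, :] - measured[reliability, :]
    dists = np.linalg.norm(diffs.reshape(len(examples), -1), axis=1)
    best = examples[np.argmin(dists)]
    # Keep trusted measurements, substitute the occluded joints.
    corrected = measured.copy()
    corrected[~reliability] = best[~reliability]
    return corrected
```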
Functional Description: Kimea takes Kinect skeleton data as input and corrects most measurement errors to carry out ergonomic assessments at workstations.
Participants: Franck Multon, Hubert Shum and Pierre Plantard
Partner: Faurecia
Contact: Franck Multon
Publications: Usability of corrected Kinect measurement for ergonomic evaluation in constrained environment - Validation of an ergonomic assessment method using Kinect data in real workplace conditions - Ergonomics Measurements using Kinect with a Pose Correction Framework - Filtered Pose Graph for Efficient Kinect Pose Reconstruction - Reliability of Kinect measurements for assessing the movement of operators in ergonomic studies
Keywords: Behavior modeling - Agent - Scheduling
Scientific Description: The software provides the following functionalities:
- A high-level XML dialect dedicated to the description of agents' activities in terms of tasks and sub-activities that can be combined with different kinds of operators: sequential, unordered, interlaced. This dialect also enables the description of time and location constraints associated with tasks.
- An XML dialect that enables the description of an agent's personal characteristics.
- An informed graph describes the topology of the environment as well as the locations where tasks can be performed. A bridge between TopoPlan and Populate has also been designed. It provides an automatic analysis of an informed 3D environment that is used to generate an informed graph compatible with Populate.
- The generation of a valid task schedule based on the previously mentioned descriptions.
With a good configuration of agents' characteristics (based on statistics), we demonstrated that the task schedules produced by Populate are representative of human ones. In conjunction with TopoPlan, it has been used to populate a district of Paris as well as imaginary cities with several thousand pedestrians navigating in real time.
Functional Description: Populate is a toolkit dedicated to task scheduling under time and space constraints in the field of behavioral animation. It is currently used to populate virtual cities with pedestrians performing different kinds of activities involving travel between different locations. However, the generic nature of the algorithm and its underlying representations enables its use in a wide range of applications that need to link activity, time and space. The main scheduling algorithm relies on the following inputs: an informed environment description, an activity an agent needs to perform, and the individual characteristics of this agent. The algorithm produces a valid task schedule compatible with the time and spatial constraints imposed by the activity description and the environment. In this task schedule, time intervals relating to travel and task fulfillment are identified, and the locations where tasks should be performed are automatically selected.
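The scheduling principle can be illustrated by a deliberately simplified, hypothetical sketch handling only the sequential operator: each task carries a location, a duration and a start-time window, and the plan is built greedily while accounting for travel times. Populate's actual algorithm is considerably richer (unordered and interlaced operators, informed graphs, agent characteristics).

```python
def schedule(tasks, travel_time, start_loc, t0=0.0):
    """Greedy sketch of a task schedule under time and space constraints.

    tasks: list of dicts with 'name', 'location', 'duration',
           and an (earliest, latest) start 'window'.
    travel_time: function (loc_a, loc_b) -> travel duration.
    Returns a list of (name, location, start, end), or None when a
    time window cannot be met.  (Illustrative stand-in only.)
    """
    plan, t, loc = [], t0, start_loc
    for task in tasks:  # sequential operator only, for brevity
        t_arrive = t + travel_time(loc, task['location'])
        earliest, latest = task['window']
        start = max(t_arrive, earliest)   # wait if we arrive early
        if start > latest:                # window missed: no valid schedule
            return None
        plan.append((task['name'], task['location'], start,
                     start + task['duration']))
        t, loc = start + task['duration'], task['location']
    return plan
```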
Participants: Carl-Johan Jorgensen and Fabrice Lamarche
Contact: Fabrice Lamarche
Keywords: 3D animation - Interactive Scenarios
Functional Description: The Theater is a software framework for developing interactive scenarios in virtual 3D environments. The framework provides means to author and orchestrate 3D character behaviors and simulate them in real time. These tools provide a basis for building a range of 3D applications, from simple simulations with reactive behaviors to complex storytelling applications including narrative mechanisms such as flashbacks.
Participant: Marc Christie
Contact: Marc Christie
Customizable Toolbox for Musculoskeletal simulation
Keywords: Biomechanics - Dynamic Analysis - Kinematics - Simulation - Mechanical multi-body systems
Scientific Description: The present toolbox aims at performing motion analysis using an inverse dynamics method.
Before performing the motion analysis steps, a musculoskeletal model is generated. This consists of first generating the desired anthropometric model from model libraries. The generated model is then kinematically calibrated using motion capture data. The inverse kinematics step, the inverse dynamics step and the muscle force estimation step are then successively performed from motion capture and external force data. Two folders and one script are available at the toolbox root. The Main script collects all the different functions of the motion analysis pipeline. The Functions folder contains all functions used in the toolbox; this folder and all its subfolders must be added to the Matlab path. The Problems folder contains the different studies. The user has to create one subfolder for each new study, and a new study is necessary whenever a new musculoskeletal model is used. Different files will be automatically generated and saved in this folder. All files located at its root are related to the model and remain valid whatever the motion considered. A new folder is added for each new motion capture; all files located in such a folder relate only to that motion.
Functional Description: Inverse kinematics, inverse dynamics, muscle force estimation, external force prediction.
Participants: Antoine Muller, Charles Pontonnier and Georges Dumont
Contact: Antoine Muller
Keywords: Virtual reality - Motion capture - Movement analysis
Functional Description: MotionGraphVR is a tool enabling users to automatically create motion graphs in Unity. It particularly targets Virtual Reality applications where, with the development of Head-Mounted Displays, users are unable to see their real body unless they use an expensive motion capture system or animation techniques (e.g., Inverse Kinematics) that suffer from a lack of visual realism. To alleviate these limitations, MotionGraphVR automatically builds a graph of human motions from a set of examples captured on a real actor, and identifies which motion path in the graph is closest to the user's actions. Additionally, the plugin provides analysis tools that allow developers of VR applications to visualise similarities between the movements they plan to use, before seamlessly connecting them in motion graphs.
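The core graph-construction step can be sketched as follows, assuming each frame is summarised by a pose vector: frame pairs whose pose distance falls below a threshold become candidate transitions (graph edges). This is a simplification for illustration; real motion graph systems also compare joint velocities and align root transforms before measuring similarity.

```python
import numpy as np

def transition_candidates(clip_a, clip_b, threshold):
    """Find frame pairs where clip_a can blend into clip_b.

    clip_a, clip_b: (F, D) arrays of per-frame pose vectors
    (e.g. flattened joint positions).  Returns the (i, j) pairs whose
    pose distance is below `threshold`; a motion graph adds an edge
    for each such pair.
    """
    # Pairwise Euclidean distances between all frames of the two clips.
    d = np.linalg.norm(clip_a[:, None, :] - clip_b[None, :, :], axis=2)
    i, j = np.nonzero(d < threshold)
    return list(zip(i.tolist(), j.tolist()))
```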
Participants: Tiffany Luong, Ludovic Hoyet and Fernando Argelaguet Sanz
Contact: Ludovic Hoyet
With the two virtual reality platforms, Immersia and Immermove, grouped under the name Immerstar, the team has access to high-level scientific facilities. This equipment benefits the research teams of the center and has allowed them to extend their local, national and international collaborations. The Immerstar platform is supported by Inria CPER funding for 2015-2019, which enables important evolutions of the equipment. In 2016, the first technical evolutions were decided and, in 2017, they were implemented. On the one hand, for Immermove, a third face was added to the immersive space and the Vicon tracking system was extended. On the other hand, for Immersia, WQXGA laser projectors with increased overall resolution, a new tracking system with a higher frequency, and new computers for simulation and image generation were installed.
In 2017, MimeTIC maintained its activity in motion analysis, modelling and simulation. In motion analysis, we focused our efforts on three major points: 1) simplifying the calibration and simulation of customized musculoskeletal models of the subjects, 2) exploring how visual perception acts on collision avoidance in pedestrian locomotion, with an extension to group behavior, and 3) adapting accurate analysis to real conditions (industrial or clinical contexts), where measurement inaccuracies and ease-of-use constraints make it difficult to directly apply methods used in laboratories.
For a long time, MimeTIC has been promoting the idea of using Virtual Reality to train human performance. On the one hand, it leads to an efficient tradeoff between high control and naturalness of the situation. On the other hand, it raises several fundamental questions about the automatic evaluation of the user's performance, and the transfer of the skills trained in VR to real practice. In 2017, we explored these two questions by 1) developing new automatic methods for recognizing and evaluating users' performance, and 2) studying the biofidelity of mass manipulation in VR using haptic interfaces.
In virtual cinematography, we applied the analysis/synthesis approach to extract and simulate film styles and narration. We also extended our previously defined Toric Space for camera placement to a Drone Toric Space, to control a group of drones filming the action of an actor while ensuring coverage of distinct cinematographic viewpoints.
The PhD thesis of Antoine Muller, defended on June 26, aimed at democratizing the use of musculoskeletal analysis for a wide range of users. The work proposed contributions improving the performance of such analyses while preserving accuracy, as well as contributions enabling easy subject-specific model calibration. Firstly, in order to control the whole analysis process, the work followed a global approach covering all the analysis steps: kinematics, dynamics and muscle force estimation. For each of these steps, quick analysis methods have been proposed; in particular, a quick resolution method for the muscle force sharing problem, based on interpolated data. Moreover, a complete calibration process, based on classical motion analysis tools available in a biomechanical lab (motion capture and force platform data), has been developed.
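The muscle force sharing problem mentioned above can be illustrated on a deliberately tiny instance: one joint, n muscles, and a sum-of-squared-activations criterion solved in closed form via a Lagrangian. This is only a sketch under simplifying assumptions (the thesis addresses the full multi-joint problem and accelerates it with interpolated data; activations here are not even bounded to [0, 1]).

```python
import numpy as np

def share_muscle_forces(torque, moment_arms, f_max):
    """Minimal muscle force sharing: one joint, n muscles.

    Minimises sum_i a_i**2 subject to
    sum_i moment_arms[i] * f_max[i] * a_i == torque.
    Closed-form stationary point of the Lagrangian.
    """
    c = np.asarray(moment_arms) * np.asarray(f_max)   # torque per unit activation
    a = torque * c / np.dot(c, c)                     # optimal activations
    return a, a * np.asarray(f_max)                   # activations, muscle forces
```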
Diane Haering, Inria post-doctoral fellow at MimeTIC, works on the determination of maximal torque envelopes of the elbow. These results have great potential for application in quantifying the articular load during work tasks and in helping to calibrate muscle parameters in musculoskeletal simulations. The method has been integrated into a more global subject-specific calibration method. It could also be used to better represent musculoskeletal models.
Ana-Lucia Cruz-Ruiz was a PhD student from November 2013 to December 2016. The goal of her thesis, related to the ANR Entracte project, was to define and evaluate muscle-based controllers for motion control. She developed an original control approach to reduce the redundancy of the musculoskeletal system: a low-dimensional representation of control mechanisms in throwing motions from a variety of subjects and target distances. The control representation stands at the kinematic level, in task and joint spaces respectively, and at the muscle activation level using the theory of muscle synergies. Representative features were chosen and extracted from the muscle data using factorization and clustering techniques, better revealing the mechanisms hidden behind such dynamic motions, and could offer a promising control representation for synthesizing motions with muscle-driven characters.
Interaction between people, and especially local interaction between walkers, is a main research topic of MimeTIC. We propose experimental approaches using both real and virtual environments to study both the perception and the action aspects of the interaction. Our efforts to validate the virtual reality platform for studying interactions were acknowledged by a publication in IEEE TVCG 2017 and presented at the IEEE VR 2017 conference. Using the VR platform, we investigated the nature of the visual information that is used for a collision-free interaction. We manipulated this visual information in two forms, global and local motion appearance. The obstacle was presented with one of five virtual appearances, associated with global motion cues (i.e., a cylinder or a sphere) or local motion cues (i.e., only the legs or the trunk). A full-body virtual walker, showing both local and global motion cues, was used as a reference condition. The final crossing distance was affected by the global motion appearances; however, appearance had no qualitative effect on motion adaptations. These findings contribute towards further understanding what information people use when interacting with others. This work was published in TVCG 2017 and presented as a poster at the ACAPS 2017 Conference. This year, we also developed new experiments in our immersive platform. We designed a study to investigate the effect of gaze interception during collision avoidance between two walkers. In such a situation, mutual gaze can be considered as a form of nonverbal communication. Additionally, gaze is believed to signal future path intentions and to be part of the nonverbal negotiation to achieve avoidance collaboratively. We considered an avoidance task between a real subject and a virtual human character, and studied the influence of the character's gaze direction on the avoidance behaviour of the participant.
Virtual reality provided us with accurate control of the situation: seventeen participants were immersed in a virtual environment, instructed to navigate across a virtual space using a joystick and to avoid a virtual character that would appear from either side. The character would either gaze or not towards the participant. Further, the character would either perform or not a reciprocal adaptation of its trajectory to avoid a potential collision with the participant. The findings were that, during an orthogonal collision avoidance task, gaze behaviour did not influence the collision avoidance behaviour of the participants. Further, the addition of reciprocal collision avoidance with gaze did not modify the collision behaviour of participants. These results suggest that, for the duration of interaction in such a task, body motion cues were sufficient for coordination and regulation. We discuss the possible exploitation of these results to improve the design of virtual characters for populated virtual environments and for interaction with users. These results were presented at the AFRV 2017 conference and submitted to the IEEE VR 2018 conference.
We also devoted considerable effort to investigating, in collaboration with Julien Pettré from the Inria Lagadic team, the process involved in the selection of interactions within one's neighbourhood. Considering the complex case of multiple interactions, we performed experiments in real conditions where a participant walked across a room whilst either one (i.e., pairwise) or two (i.e., group) participants crossed the room perpendicularly. By comparing these pairwise and group interactions, we assessed whether a participant avoids two upcoming collisions simultaneously, or as sequential pairwise interactions. Furthermore, in the group trials we varied the relative position of the two participants that crossed the trajectory of the other. This allowed us to change the affordance of passing through or around (i.e., its 'pass-ability'). Results showed that participants in the group trials consistently avoided collision with lower risks of impending collision (as quantified by the future distance of closest approach) than in the pairwise trials. This implies that a participant, to some extent, interacted simultaneously with two other participants. Furthermore, we analysed in the group trials how the 'pass-ability' evolved over time. Results indicated that the affordance of passing through or around was already established early in the interaction. This shows that participants are susceptible to the affordance of passing through a gap between others. We concluded that pedestrians are able to interact with two other walkers simultaneously, rather than treating each interaction in sequence. These results were presented at the ICPA 2017 conference.
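The "future distance of closest approach" used above as a measure of collision risk can be computed by linearly extrapolating both walkers' trajectories; a minimal sketch:

```python
import numpy as np

def minimal_predicted_distance(p1, v1, p2, v2):
    """Future distance of closest approach of two walkers.

    Positions p and velocities v are 2D; trajectories are linearly
    extrapolated, a standard way to quantify the risk of an impending
    collision.  Returns (distance, time_to_closest_approach), with the
    time clamped to the future (t >= 0).
    """
    dp = np.asarray(p2, float) - np.asarray(p1, float)
    dv = np.asarray(v2, float) - np.asarray(v1, float)
    vv = np.dot(dv, dv)
    # Zero relative motion: the distance never changes.
    t = 0.0 if vv == 0 else max(0.0, -np.dot(dp, dv) / vv)
    return float(np.linalg.norm(dp + t * dv)), t
```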
Finally, we continued working on the interaction between a walker and a moving robot, in collaboration with Philippe Souères and Christian Vassallo (LAAS, Toulouse). Robotics has developed rapidly in recent years, and it is clear that robots and humans will share the same environment in the near future. In this context, understanding local interactions between humans and robots during locomotion tasks is important to steer robots among humans in a safe manner; our work is a first step in this direction. Our goal is to describe how, during locomotion, humans avoid collision with a moving robot. We recently published in Gait and Posture our results on collision avoidance between participants and a non-reactive robot (we wanted to avoid the effect of a complex loop created by a robot reacting to the participants' motion). Our objective was to determine whether the main characteristics of such an interaction preserve those previously observed: accurate estimation of collision risk, and anticipated and efficient adaptations. We observed that collision avoidance between a human and a robot has similarities with human-human interactions (estimation of collision risk, anticipation) but also leads to major differences: humans preferentially give way to the robot, even if this choice is not optimal with regard to the motion adaptation needed to avoid the collision. In a new study, we considered the situation where the robot was reactive to the walker's motion. First, it turns out that humans have a good understanding of the robot's behavior, and their reactions are smoother and faster than in the case of a non-collaborative robot. Second, humans adapt similarly to the human-human study, and the crossing order is respected in almost all cases. These results have strong similarities with those observed with two humans crossing each other.
Recording human activity is a key point of many applications and fundamental works. Numerous sensors and systems have been proposed to measure positions, angles or accelerations of the user's body parts. Whatever the system, one of the main challenges is to automatically recognize and analyze the user's performance from poor and noisy signals. Hence, recognizing and measuring human performance are important scientific challenges, especially when using low-cost and noisy motion capture systems. MimeTIC has addressed the above problems in two main application domains; in this section, we detail the ergonomics application of such an approach. In ergonomics, we explored the use of low-cost motion capture systems (i.e., a Microsoft Kinect) to measure the 3D pose of a subject in natural environments, such as a workstation, with many occlusions and inappropriate sensor placements. Predicting the potential accuracy of the measurement for such complex 3D poses and sensor placements is challenging with classical experimental setups. After evaluating the actual accuracy of the pose reconstruction method delivered by the Kinect, we identified that occlusions were a very important problem to solve in order to obtain reliable ergonomic assessments in real cluttered environments. To this end, we developed an approach to deal with the long occlusions that occur in real manufacturing conditions. This approach is based on a structured database of examples (named a filtered pose graph) that enables real-time correction of Kinect skeleton data.
This method has been applied to a complete ergonomic process outputting RULA scores based on the reconstructed and corrected poses. We challenged this method against a reference motion capture system in laboratory conditions: we compared the joint angles and RULA scores obtained with our system and with a reference Vicon mocap system in various conditions (with and without occlusions). The results show a very good accordance between manually tuned RULA scores given by experts and those computed by the automatic system. These results demonstrate that it could be used in an industrial context to support the ergonomists' decision-making process.
This year we also extended this work to evaluate whether corrected data enabled us to estimate reliable joint torques using inverse dynamics, providing new information for ergonomic assessment. Indeed, joint torques and forces are relevant quantities to estimate the biomechanical constraints of working tasks in ergonomics. However, inverse dynamics requires accurate motion capture data, which are generally not available in real manufacturing plants. Markerless and calibrationless measurement systems based on depth cameras, such as the Microsoft Kinect, are promising means to measure 3D poses in real time, such as our corrected Kinect approach. Thus, we evaluated the reliability of an inverse dynamics method based on this corrected skeleton data and its potential use for estimating joint torques and forces in such cluttered environments. To this end, we compared the calculated joint torques with those obtained with a reference inverse dynamics method based on an optoelectronic motion capture system. Results show that the Kinect skeleton data enabled the inverse dynamics process to deliver reliable joint torques in occlusion-free (r=0.99 for the left shoulder elevation) and occluded (r=0.91 for the left shoulder elevation) environments. However, differences remain between the joint torque estimations. Such reliable joint torques open appealing perspectives for the use of new fatigue or solicitation indexes based on internal efforts measured on site. The study demonstrates that corrected Kinect data could be used to estimate internal joint torques, using an adapted inverse dynamics method. The method could be applied on-site because it can handle some cases with occlusions. The resulting Kinect-based method is easy to use, runs in real time and could assist ergonomists in risk evaluation on site.
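The agreement values quoted above (e.g. r=0.99) are Pearson correlation coefficients between the two joint torque time series; for reference, a minimal implementation:

```python
import numpy as np

def torque_agreement(torques_kinect, torques_reference):
    """Pearson correlation between two joint torque time series,
    the agreement measure quoted in the text (e.g. r = 0.99 for the
    left shoulder elevation without occlusions)."""
    a = np.asarray(torques_kinect, float)
    b = np.asarray(torques_reference, float)
    a, b = a - a.mean(), b - b.mean()
    return float(np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b)))
```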
This work was partially funded by the Faurecia company through a Cifre convention.
In clinical gait analysis, we proposed a method to overcome the main limitations imposed by the low accuracy of Kinect measurements in real medical exams. Indeed, inaccuracies in the 3D depth images lead to badly reconstructed poses and inaccurate gait event detection. In the latter case, confusion between the foot and the ground leads to inaccuracies in foot-strike and toe-off event detection, which is essential information in a clinical exam. To tackle this problem, we assumed that heel strike events could be indirectly estimated by searching for the extreme values of the distance between the knee joints along the walking longitudinal axis. As the Kinect sensor may not accurately locate the knee joint, we used anthropometrical data to select a body point located at a constant height, where the knee should be in the reference posture. Compared to previous works using a Kinect, heel strike events and gait cycles are more accurately estimated, which could improve global clinical gait analysis frameworks with such a sensor. Once these events are correctly detected, it is possible to define indexes that give the clinician a rapid overview of gait quality. We therefore proposed a new method to assess gait asymmetry based on depth images, in order to decrease the impact of errors in the Kinect joint tracking system. It is based on the longitudinal spatial difference between lower-limb movements during the gait cycle. The movement of artificially impaired gaits was recorded using both a Kinect placed in front of the subject and a motion capture system. The proposed longitudinal index distinguished asymmetrical gait, while other symmetry indices based on spatiotemporal gait parameters failed using such Kinect skeleton measurements. This gait asymmetry index measured with a Kinect is low cost, easy to use, and is a promising development for clinical gait analysis.
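The knee-based event detection described above can be sketched as follows, assuming the knees' longitudinal coordinates have already been extracted per frame; the sign convention (maxima of the left-minus-right distance marking left-leg strikes) and all names are illustrative assumptions.

```python
import numpy as np

def heel_strikes(knee_left_x, knee_right_x):
    """Detect heel-strike frames from knee trajectories.

    Heel strikes are taken at the extreme values of the left-right
    knee distance along the walking (longitudinal) axis, which is
    more robust to Kinect depth noise than tracking the feet.
    Inputs are 1D arrays of the knees' longitudinal coordinates;
    returns the frame indices of the local maxima of the signed
    distance (one side's strikes, under the assumed convention).
    """
    d = np.asarray(knee_left_x, float) - np.asarray(knee_right_x, float)
    # Interior local maxima of the knee-distance signal.
    return [i for i in range(1, len(d) - 1)
            if d[i] > d[i - 1] and d[i] >= d[i + 1]]
```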
This method has been compared with other classical approaches to assessing gait asymmetry, using either cheap Kinect data or Vicon data. We demonstrated the superiority of the approach when using Kinect data, for which traditional approaches failed to accurately detect gait asymmetry. It was validated on healthy subjects who were forced to walk with a 5 cm sole placed below each foot alternately. In 2017, we compared the results obtained with the well-known Continuous Relative Phase (CRP), which aims at quantifying a within-stride asymmetry index. CRP requires noise-free and accurate motion capture, which is difficult to obtain in clinical settings. As our index, the Longitudinal Asymmetry Index (ILong), is obtained from data of a low-cost depth camera (Kinect), namely depth images averaged over several gait cycles rather than derived joint positions or angles, we checked that it could deliver more reliable asymmetry information within gait, compared to CRP. Hence, this study aimed to evaluate (1) the validity of CRP computed with Kinect, (2) the validity and sensitivity of ILong for measuring gait asymmetry based solely on data provided by a depth camera, (3) the clinical applicability of a posteriorly mounted camera system to avoid occlusion caused by standard front-fitted treadmill consoles, and (4) the number of strides needed to reliably calculate ILong. The results show that CRP based on time derivatives of joint angles failed to detect gait asymmetry when using Kinect data. However, our index, ILong, detected this disturbed gait reliably and could be computed from a posteriorly placed Kinect without loss of validity. A minimum of five strides was needed to achieve a correlation coefficient of 0.9 between the standard MBS and the low-cost depth camera based ILong. ILong provides a clinically pragmatic method for measuring gait asymmetry, with application to improved patient care through enhanced disease screening, diagnosis and monitoring.
This work has been done in collaboration with the MsKLab from Imperial College London, to design new gait asymmetry indexes that could be used in daily clinical analysis.
Following our previous studies on the tennis serve, we were able to evaluate the link between performance and risk of injury. To go further, we carried out new experiments on top-level young French players (between 12 and 18 years old) to quantify the technical motor errors made (kinematics) and their impact on the risk of injury (dynamics). These experiments are part of a collaboration with the FFT (French Tennis Federation). We recently validated that the waiter's serve implies a higher risk of injury. It is a movement that was known by coaches as unproductive and risky, but this had never been validated.
Since September 2016, Antonio Mucherino has a half-time Inria detachment in the MimeTIC team, in order to collaborate on exploring distance geometry-based problems in representing and editing human motion.
In this context, an extension of a distance geometry approach to dynamical problems was proposed in , and we co-supervised Antonin Bernardin for his Master's thesis, which focused on applying this extended approach to retargeting human motions. In character animation, it is often the case that motions created or captured on a specific morphology need to be reused on characters having a different morphology. However, specific relationships such as body contacts or spatial relationships between body parts are often lost during this process, and existing approaches typically try to determine automatically which body part relationships should be preserved in the retargeted animation. Instead, we proposed a novel frame-based approach to motion retargeting, which relies on a normalized representation of all inter-joint distances to encompass all the relationships existing in a given motion. In particular, we proposed to abstract postures by computing all the inter-joint distances of each animation frame and to represent them by Euclidean Distance Matrices (EDMs). Such EDMs have the benefit of capturing all the subtle relationships between body parts, while being adaptable, through a normalization process, into a morphology-independent distance-based representation. Finally, they can also be used to efficiently compute retargeted joint positions that best satisfy newly imposed distances. We demonstrated that normalized EDMs can be efficiently applied to a different skeletal morphology by using a dynamical distance geometry approach, and presented results on a selection of motions and skeletal morphologies.
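To make the EDM machinery concrete, here is a small sketch computing a posture's EDM and recovering joint positions (up to a rigid transform) from distances via classical multidimensional scaling. This is only an illustration of the distance-to-position step; the actual retargeting work uses a dynamical distance geometry approach and a morphology normalization that the sketch omits.

```python
import numpy as np

def edm(joints):
    """Euclidean Distance Matrix of a posture: entry (i, j) is the
    distance between joints i and j ((J, 3) input)."""
    diff = joints[:, None, :] - joints[None, :, :]
    return np.linalg.norm(diff, axis=2)

def positions_from_edm(D, dim=3):
    """Recover positions consistent with an EDM, up to a rigid
    transform, by classical multidimensional scaling."""
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    G = -0.5 * J @ (D ** 2) @ J                # Gram matrix of centred points
    w, v = np.linalg.eigh(G)                   # eigenvalues in ascending order
    w, v = w[::-1][:dim], v[:, ::-1][:, :dim]  # keep the `dim` largest
    return v * np.sqrt(np.clip(w, 0.0, None))  # coordinates in `dim` dims
```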
In parallel, in collaboration with national (LIX, École Polytechnique, Palaiseau) and international partners, we have been working on improving the performance of existing algorithms for distance geometry, independently of the considered application. In , we analyzed the main causes of the approach failing to provide accurate solutions in cases where interval distances are provided (instead of unique distance values), and we proposed possible strategies to detect such situations. In , we presented a linear optimization problem for a common pre-processing step in distance geometry: identifying a special vertex order that allows discretizing the solution search space.
Action recognition based on the human skeleton structure is nowadays a flourishing research field, mainly due to recent advances in capture technologies and skeleton extraction algorithms. In this context, we observed that 3D skeleton-based actions share several properties with handwritten symbols, since both result from a human performance. We accordingly hypothesized that the action recognition problem can take advantage of trial-and-error approaches already carried out on handwritten patterns. Therefore, inspired by one of the most efficient and compact handwriting feature sets, we proposed a skeleton descriptor referred to as Handwriting-Inspired Features (HIF3D). First, joint trajectories are preprocessed in order to handle the variability among actors' morphologies. Then we extract the HIF3D features from the processed joint locations according to a time partitioning scheme, so as to additionally encode the temporal information over the sequence. Finally, we used a Support Vector Machine (SVM) for classification. Evaluations conducted on two challenging datasets, namely HDM05 and UTKinect, testify to the soundness of our approach, as the obtained results outperform state-of-the-art algorithms that rely on skeleton data.
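As a hedged illustration of the time-partitioning scheme only (not the actual HIF3D feature set, which is handwriting-inspired), one can build a fixed-length descriptor for an SVM by concatenating simple per-window statistics of the joint trajectories:

```python
import numpy as np

def hif3d_like_features(joints, n_parts=4):
    """Time-partitioned trajectory descriptor (in the spirit of HIF3D).

    joints: (T, J, 3) sequence of morphology-normalized joint
    positions.  The sequence is split into `n_parts` temporal windows,
    and simple per-window statistics (mean position and mean
    displacement of every joint) are concatenated, encoding temporal
    order.  Illustrative stand-in for the real feature set.
    """
    feats = []
    for part in np.array_split(np.asarray(joints, float), n_parts):
        disp = (np.diff(part, axis=0) if len(part) > 1
                else np.zeros((1,) + part.shape[1:]))
        feats.append(part.mean(axis=0).ravel())   # average pose in the window
        feats.append(disp.mean(axis=0).ravel())   # average motion in the window
    return np.concatenate(feats)                  # fixed-length vector for an SVM
```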
This work has been carried-out in collaboration with the IRISA Intuidoc team, with Yacine Boulahia who is a co-supervised PhD student with Eric Anquetil.
Automatically evaluating and quantifying a player's performance is a complex task, since the important motion features to analyze depend on the type of action performed. Above all, this complexity is due to the variability of morphologies and styles of both the experts who perform the reference motions and the novices. Based only on a database of experts' motions, without additional knowledge, we proposed an innovative two-level DTW (Dynamic Time Warping) approach to temporally and spatially align the motions and extract the imperfections of the novice's performance for each joint . We applied our method to tennis serves and karate katas .
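The temporal-alignment core of such an approach is standard DTW. The sketch below is textbook dynamic-programming DTW on scalar sequences with an absolute-difference local cost; it is illustrative only, not the team's two-level spatial/temporal variant.

```python
def dtw(a, b, dist=lambda x, y: abs(x - y)):
    """Classic dynamic-programming DTW: returns the minimal cumulative
    alignment cost between sequences a and b."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = best cost of aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])
            # match, insertion, or deletion step
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

For joint trajectories, the scalar local cost would be replaced by a distance between poses; the warping path recovered by backtracking through D gives the frame-to-frame correspondence used for comparison.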
Recording human activity is a key point of many applications and fundamental works. Numerous sensors and systems have been proposed to measure positions, angles or accelerations of the user's body parts. Whatever the system, one of the main challenges is to automatically recognize and analyze the user's performance from poor and noisy signals. Hence, recognizing and measuring human performance are important scientific challenges, especially when using low-cost and noisy motion capture systems. MimeTIC has addressed the above problems in two main application domains. In this section, we detail the ergonomics application of such an approach. Firstly, in ergonomics, we explored the use of low-cost motion capture systems (i.e., a Microsoft Kinect) […] of geometrical and mechanical characteristics of the haptic device. Uncertainties on friction coefficients within the model are tuned thanks to an experimental protocol enabling a subjective comparison between real and virtual manipulations of a low-mass object. The compensation of friction on the first and second axes of the haptic interface showed a significant improvement of both realism and perceived load .
We have designed and made available an open database of annotated film clips together with an analysis of elements of film style related to how the shots are composed, how the transitions are performed between shots and how the shots are sequenced to compose a film unit . The purpose is to initiate a shared repository pertaining to elements of film style which can be used by computer scientists and film analysts alike. Though both research communities rely strongly on the availability of such information to foster their findings, current databases are either limited to low-level features (such as shots lengths, color and luminance information), contain noisy data, or are not available to the communities. The data and analysis we provide open exciting perspectives as to how computational approaches can rely more thoroughly on information and knowledge extracted from existing movies, and also provide a better understanding of how elements of style are arranged to construct a consistent message.
We have introduced Film Editing Patterns (FEP), a language to formalize film editing practices and stylistic choices found in movies. FEP constructs are constraints expressed over one or more shots from a movie sequence that characterize changes in cinematographic visual properties such as shot size, region, angle of on-screen actors.
We have designed the elements of the FEP language, then introduced its usage on annotated film data, and described how it can support users in the creative design of film sequences in 3D. More specifically: (i) we proposed the design of a tool to craft edited filmic sequences from 3D animated scenes that uses FEPs to support the user in selecting camera framings and editing choices that follow certain best practices used in cinema; (ii) we conducted an evaluation of the application with professional and non-professional filmmakers. The evaluation suggested that users generally appreciate the idea of FEP, and that it can effectively help novice and moderately experienced users in crafting film sequences with little training and satisfying results.
We have designed a set of high-level tools for filming dynamic targets with quadrotor drones. To this end, we proposed a specific camera parameter space (the Drone Toric Space) together with interactive on-screen viewpoint manipulators compatible with the physical constraints of a drone. We then designed a real-time path-planning approach in dynamic environments which ensures both the cinematographic properties of the viewpoints along the path and the feasibility of the path for a quadrotor drone. We finally demonstrated how the Drone Toric Space can be combined with our path-planning technique to coordinate the positions and motions of multiple drones around dynamic targets, so as to ensure the coverage of distinct cinematographic viewpoints. The proposed research prototypes have been evaluated by an experienced drone pilot and filmmaker, as well as by non-expert users. Not only does the tool demonstrate its benefit in rehearsing complex camera moves for the film and documentary industries, but it also demonstrates its usability for everyday recording of aesthetic camera motions.
This contract started in February 2017 and will end in October 2018. On the M2S side, it involves two permanent members of the MimeTIC team, Armel Crétual and Franck Multon, and two engineers, Antoine Marin (18-month grant) and Brice Bouvier (10-month grant).
This project is a collaboration between BA Healthcare and the M2S lab. It aims at developing a robotic platform allowing physicians to start gait rehabilitation as early as possible, even before patients are able to maintain an upright posture on their own. The usual way to perform such rehab sessions is to make the patient walk on a treadmill with a harness preventing the patient from falling. The two main limits of this approach are that:
only straight-line gait at constant speed is feasible, whereas the risk of falling is much higher when changing speed or turning;
walking on a treadmill when motor abilities are severely affected can be challenging and can generate strong apprehension.
In a previous project, Robo-K, which ended in September 2016, BA Healthcare developed a first prototype of a mobile robot that strongly modified this approach: the harness is mobile and follows the patient's displacement. In this way, the patient walks on the ground at his/her desired speed and the physician can include curved trajectories in the rehab process.
The main novelty of the Robo-KII project is to implement a biofeedback system on the robotic platform to reinforce rehab sessions. Working closely with physicians from two PMR services, CHU Rennes and the Kerpape center, we intend to define the optimal feedback to be given to the patients and to measure the corresponding gait parameters thanks to depth cameras mounted on the robot.
Bilateral contract with Technicolor on empowering drones with cinematographic knowledge. Participants: Philippe Guillotel, Julien Fleureau, Quentin Galvane. Amount: 25k€. Duration: 24 months.
SATT "Ouest valorisation" grant for the maturation of the Kimea software and project (Franck Multon and Pierre Plantard): 12 months of three full-time people, 300k€. The creation of a start-up company is planned for the beginning of 2018.
SATT "Ouest valorisation" grant for the maturation of the Populate software (Fabrice Lamarche). One full-time engineer (2017-2018).
Cineviz is a 3-year ANR LabCom project (2016-2019). Amount: 300k€. Partners: SolidAnim, UR1.
The project is a bilateral collaboration with the SolidAnim company. The objective is to jointly progress on the design and implementation of novel tools for preproduction in the film industry. The project will address the challenges of (i) proposing expressive framing tools, (ii) integrating the technical aspects of shooting (how to place the cameras, lights and green sets) directly at the design stage, and (iii) novel interaction metaphors for designing and controlling the staging of lights in preproduction, using an example-based approach.
The ANR project ENTRACTE is a collaboration between the Gepetto team at LAAS, Toulouse (head of the project) and the Inria MimeTIC team. The project started in November 2013 and ended in August 2017. The purpose of ENTRACTE is to address the action-planning problem, crucial for robots as well as for virtual human avatars, by analyzing human motion at a biomechanical level and by deriving from this analysis bio-inspired motor control laws and bio-inspired paradigms for action planning. Ana Lucia Cruz Ruiz, recruited as a PhD student at the start of the project, defended her thesis on muscle-based control based on synergies last year.
The Cavaletic collaborative project is led by University Bretagne Sud and also involves University Rennes 2 (CREAD Lab.). It has been funded by the national IFCE (Institut Français du Cheval et de l'Equitation) in order to develop and evaluate technological assistance for learning horse riding, thanks to a user-centered approach. MimeTIC is involved in measuring expert and non-expert horse riders' motions in standardized situations in order to develop metrics measuring riders' performance. These will be used to develop a technological system embedded on users to evaluate their performance and provide them with real-time feedback to correct potential errors.
An exclusive three-year contract has been signed between the M2S laboratory and the French Federation of Tennis. The goal is to perform biomechanical analyses of 3D tennis serves on a population of 40 players of the Pôle France, in order to determine the link between injuries and the biomechanical constraints on joints and muscles depending on the age and gender of the players. In the end, the goal is to evaluate their training load.
gDGA (generalization of the Distance Geometry and its Applications) is an INS2I/CNRS PEPS project involving local and national partners. Distance geometry can nowadays be seen as a classical problem in operational research, with a wide range of applications. The main aim of this interdisciplinary project is to extend the definition and the range of applicability of distance geometry. In particular, our main interest is in dynamical problems, motivated by a number of applications of interest, including interaction motion adaptation, the simulation of crowd behaviours, and the conception of modern recommender systems. The classical application of distance geometry arising in the biological field is also taken into consideration. The need for strong computational power in the considered applications motivates implementing our algorithms in environments capable of exploiting the resources of GPU cards.
The IRMA project is an Imag'In project funded by CNRS which aims at developing innovative methodologies for research in the field of cultural heritage, based on the combination of medical imaging technologies and interactive 3D technologies (virtual reality, augmented reality, haptics, additive manufacturing). It relies on close collaborations with the National Institute of Preventive Archaeological Research (Inrap), the Research Center in Archaeology, Archaeosciences and History (CReAAH UMR 6566) and the company Image ET. The developed tools are intended for cultural heritage professionals such as museums, curators, restorers and archaeologists. We focus on a large number of archaeological artefacts of different natures and various time periods (Paleolithic, Mesolithic, Iron Age, Medieval) from all over France. We can notably mention the oldest human bones found in Brittany (the Beg Er Vil clavicle), a funeral urn from Trebeurden (22), and a bronze cauldron from a burial of the Merovingian necropolis "Crassés Saint-Dizier" (51). This project involves a strong collaboration with members of the Hybrid team (Valérie Gouranton, Bruno Arnaldi and Jean-Baptiste Barreau), Théophane Nicolas (Inrap/UMR Trajectoires), Quentin Petit (SED Inria Rennes), and Grégor Marchand (CNRS/UMR CReAAH).
The ADT-Immerstar is driven by the SED and aims at developing new tools and facilities for the scientific community in order to develop demos and use the two immersive rooms in Rennes: Immersia and Immermove. The engineer (Quentin Petit, SED) is responsible for homogenizing the software modules and development facilities on each platform, installing new upgrades, and developing collaborative applications between the two sites.
The Inria PRE project entitled "Smart sensors and novel motion representation breakthrough for human performance analysis" aims at designing a new description of human motion in order to automatically capture, measure and transfer the intrinsic constraints of human motion. Current approaches consist in manually editing the constraints associated with a motion, using a classical skeleton representation with joint angles based on direct or indirect measurements, and then performing inverse kinematics to fulfill these constraints. We aim at designing a new representation to simplify this processing pipeline and make it automatic, together with relevant motion sensors that could provide enough information to automatically extract these intrinsic constraints. To this end, this project has been jointly proposed with the Inria CAIRN team, which develops sensors based on joint orientations and distances between sensors. We aim at extending this type of device to measure new types of information that would help to simplify the above-mentioned pipeline. A postdoc arrived in November 2016 to work jointly with CAIRN. We also involved Hubert Shum from Northumbria University to link this project with the long-term collaboration with Dr. Shum on this type of problem.
Title: Fostering Research on Models for Storytelling Applications
International Partner (Institution - Laboratory - Researcher):
NCCU (Taiwan) - Intelligent Media Lab (IML) - Tsai-Yen Li
Start year: 2016
See also: http://
Interactive Storytelling is a new medium which allows users to alter the content and outcome of narratives through role-playing and specific actions. With the quality, availability and reasonable cost of display technologies and 3D interaction devices on one side, and the accessibility of 3D content creation tools on the other, this medium is taking a significant share of entertainment (as demonstrated by the success of cinematographic games such as Heavy Rain or Beyond: Two Souls). These advances push us to rethink the way narratives are traditionally structured, explore new interactive modalities and provide new interactive cinematographic experiences. As a sequel to the first associate team FORMOSA 1, we propose to address new challenges pertaining to interactive storytelling, such as the use of temporal structures in narratives, interaction modalities and their impact in terms of immersion, and the adaptation of real cinematographic data to 3D environments. To achieve these objectives, the associate team will rely on the complementary skills of its partners and on the co-supervision of students.
Title: REal data against crowd SImulation AlgorithMS
International Partner (Institution - Laboratory - Researcher):
University of North Carolina at Chapel Hill (United States) - GAMMA Research Group (GAMMA) - Ming LIN
Start year: 2015
See also: http://
RE-SIMS aims at gathering the best international research teams working on crowd simulation to enable significant progress in the level of realism achieved by crowd simulators. To this end, RE-SIMS aims at improving methods for capturing crowd motion data that describe real crowd behaviors, as well as at improving data assimilation techniques.
In this renewal, RE-SIMS extends the previous SIMS partnership and follows a multidisciplinary direction.
Dr. Edouard Auvinet, Imperial College London, UK (collaboration with Franck Multon)
Dr. Hubert Shum, Northumbria University, Newcastle, UK (collaboration with Franck Multon and Ludovic Hoyet, with joint papers and supervision)
Dr. Rachel McDonnell, Trinity College Dublin, Ireland (on-going collaboration with Ludovic Hoyet, including a 6-month internship from one of her PhD students in Rennes)
Prof. Carol O’Sullivan, Trinity College Dublin, Ireland (on-going collaboration with Ludovic Hoyet)
Prof. Carlile Lavor, UNICAMP, Campinas, Sao Paulo, Brazil (collaboration with Antonio Mucherino)
Dr. Douglas S. Gonçalves, Federal University of Santa Catarina, Florianópolis, Brazil (collaboration with Antonio Mucherino)
Jung-Hsin Lin, Academia Sinica, Taipei, Taiwan (collaboration with Antonio Mucherino)
Victoria Interrante, Professor, Department of Computer Science and Engineering, University of Minnesota, USA, December 8th, 2017
Michael Cinelli, Associate Professor, Kinesiology and Physical Education, Wilfrid Laurier University, Canada, June 2017
Emma Carrigan, Trinity College Dublin, Ireland (PhD supervisor: Dr. Rachel McDonnell), 6-month internship in collaboration with Technicolor (Quentin Avril), Jan. to June 2017.
Anne-Hélène Olivier, Workshop VHCIE 2017, IEEE VR 2017, Los Angeles, United States, March 2017
Anne-Hélène Olivier, Workshop Interactions, Rennes, June 2017
Ludovic Hoyet, co-organiser of the French Computer Graphics - Virtual Reality days, October 2017, Rennes, France
Marc Christie, steering committee of Motion in Games 2017
Ludovic Hoyet: co-chair of French Computer Graphics days 2017, October 2017, Rennes, France
Antonio Mucherino: Workshop on Computational Optimization (WCO17), in the framework of the "Federated Conference on Computer Science and Information Systems" (FedCSIS17), co-chair with Stefka Fidanova and Daniela Zaharie
Web: https://
Ludovic Hoyet, ACM Motion in Games MIG 2017, Barcelona, Spain, Nov. 2017
Ludovic Hoyet, ACM Symposium on Applied Perception 2017, Cottbus, Germany, Sept. 2017
Ludovic Hoyet, International Conference on Computer Graphics Theory and Applications 2018, Madeira, Portugal, Jan. 2018
Anne-Hélène Olivier, ACM Motion in Games MIG 2017, Barcelona, Spain, October 2017
Anne-Hélène Olivier, IEEE VR 2018 TVCG Conference paper, Reutlingen, Germany, March 2018
Franck Multon, ACM Motion in Games MIG 2017, Barcelona, Spain, Nov. 2017
Franck Multon, Computer Animation and Social Agents CASA 2017, Seoul, Korea, May 2017
Franck Multon, IEEE Conference on Automatic Face and Gesture Recognition FG'2017, Washington DC, US, June 2017
Franck Multon, Affective Computing and Intelligent Interaction ACII 2017, San Antonio, United States, October 2017
Richard Kulpa, International Conference on Computer Graphics Theory and Applications (GRAPP), 2017
Marc Christie, Motion In Games (MIG) 2017
Ludovic Hoyet, ACM Siggraph, Los Angeles, United States, July 2017
Ludovic Hoyet, IEEE VR 2018, Reutlingen, Germany, March 2018
Ludovic Hoyet, Pacific Graphics, Taipei, Taiwan, October 2017
Anne-Hélène Olivier, IEEE VR 2018 TVCG Conference paper, Reutlingen, Germany, March 2018
Anne-Hélène Olivier, ACM Motion in Games MIG 2017, Barcelona, Spain, October 2017
Anne-Hélène Olivier, ACM VRST 2017, Gothenburg, Sweden, November 2017
Richard Kulpa, IEEE International Conference on Automatic Face and Gesture Recognition, 2018
Richard Kulpa, Siggraph CHI, 2018
Richard Kulpa, GRAPP, 2018
Marc Christie Eurographics 2017
Marc Christie ACM Siggraph CHI 2017
Marc Christie Siggraph 2017
Marc Christie IEEE VR 2017
Franck Multon IEEE VR 2017
Franck Multon ACM VRST 2017
Franck Multon ACM MIG 2017
Franck Multon, Presence, MIT Press
Franck Multon, Computer Animation and Virtual Worlds CAVW, John Wiley
Antonio Mucherino, Guest Editor of Optimization Letters, Springer
Armel Crétual, Editorial board of Journal of Electromyography and Kinesiology
Marc Christie, Associate Editor of the Visual Computer
Franck Multon, Medical & Biological Engineering & Computing, Computer and Graphics, IEEE Journal of Biomedical and Health Informatics, IEEE Transactions on Visualization and Computer Graphics, Journal of Computer Science and Technology, Computer Animation and Virtual Worlds, ROBOMECH journal
Ludovic Hoyet, ACM Transactions on Graphics, IEEE Transactions on Visualization and Computer Graphics, Computers & Graphics, Computer Graphics Forum
Anne-Hélène Olivier, Gait and Posture, Motor Control
Antonio Mucherino, Journal of Global Optimization, Springer, Discrete Applied Mathematics, Elsevier
Armel Crétual, Journal of Electromyography and Kinesiology, Clinical Biomechanics, Gait & Posture, Journal of Orthopaedic Research, Medical & Biological Engineering & Computing, PlosOne
Richard Kulpa, Human Movement Science, 2017, Sport Medicine (SMOA), 2017, CGVCVIP, 2017
Marc Christie, IEEE TVCG, The Visual Computer, CGF
Anne-Hélène Olivier, VenLab, Brown University, Providence USA, August 2017
Anne-Hélène Olivier, IFSTTAR Marne la Vallée, November 2017
Antonio Mucherino, Institute of Computer Technology, TU Vienna, Austria. Invited by N. TaheriNejad. April 2017.
Antonio Mucherino, Research Center for Applied Sciences, Academia Sinica, Taipei, Taiwan. Invited by J-H. Lin. June 2017.
Franck Multon, ANR expert, member of the ANR CPDS 4 "Santé Bien-être" through the Allistene national Alliance to design the next ANR call for projects, CRSNG (Canada)
Armel Crétual, 5 projects evaluated for UEFISCDI (The Executive Agency for Higher Education, Research, Development and Innovation Funding, Romania)
Marc Christie, expert for CIR (crédit impôt recherche – two cases in 2017), expert for ANR (CE 27)
Franck Multon is member of the University Rennes2 Research steering committee "commission recherche", and Academic Council "CAC",
Franck Multon is member of the M2S Lab steering committee,
Franck Multon is member of the UFR-APS steering committee in University Rennes2
Ludovic Hoyet is an elected member of the Board of Managers for the French Computer Graphics Association (Association Française d'Informatique Graphique) since Oct. 2017
Ludovic Hoyet participated in the local MESR PhD Grant Auditions 2017
Georges Dumont is president of the elected group and member of the scientific council of École Normale Supérieure de Rennes
Georges Dumont is scientific head of Immerstar platforms (Immersia + Immermove) jointly for Inria and Irisa Partners
Richard Kulpa is member of the University Rennes2 Research steering committee "commission recherche", the Academic Council "CAC" and the Committee of International Affairs "CAI"
Benoit Bideau is director of the M2S Lab
Doctorat : Ludovic Hoyet & Anne-Hélène Olivier, Evaluations en informatique graphique et réalité virtuelle : concepts généraux et cas pratiques, 2.25h, Journée Jeunes Chercheurs du GDR IG-RV 2017, France
Master : Franck Multon, co-leader of the IEAP Master (1 and 2) "Ingénierie et Ergonomie de l'Activité Physique", STAPS, University Rennes2, France
Master : Franck Multon, "Santé et Performance au Travail : étude de cas", leader of the module, 30H, Master 1 M2S, University Rennes2, France
Master : Franck Multon, "Analyse Biomécanique de la Performance Motrice", leader of the module, 30H, Master 1 M2S, University Rennes2, France
Master: Charles Pontonnier, "Numerical methods", leader of the module, 36H, Mechanics, École Spéciale Militaire de Saint-Cyr Coëtquidan, France
Master: Charles Pontonnier, "Numerical simulation of mechanical systems", leader of the module, 18H, Mechanics, École Spéciale Militaire de Saint-Cyr Coëtquidan, France
Master: Charles Pontonnier, "Analytical Mechanics" , 40H, Mechanics, École Spéciale Militaire de Saint-Cyr Coëtquidan, France
Master: Charles Pontonnier, "Robotics: practical views", 24H, International cadets seminar, École Spéciale Militaire de Saint-Cyr Coëtquidan, France
Master: Charles Pontonnier, "Design and control of legged robots", leader of the module, 38H, Electronics, École Spéciale Militaire de Saint-Cyr Coëtquidan, France
Master: Charles Pontonnier, "Design, simulation and control of mechanical systems", leader of the module, 24H, Lecturers training in mechatronics, École Normale Supérieure de Rennes, France
Master: Charles Pontonnier, "Musculoskeletal modeling and ergonomics", 3H, Master 1 M2S, University Rennes2, France
Master : Georges Dumont, Responsible of the second year of the master Mechatronics, Rennes 1 University and École Normale Supérieure de Rennes, France
Master : Georges Dumont, Mechanical simulation in Virtual reality, 36H, Master Mechatronics, Rennes 1 University and École Normale Supérieure de Rennes, France
Master : Georges Dumont, Mechanics of deformable systems, 40H, Master FE, École Normale Supérieure de Rennes, France
Master : Georges Dumont, oral preparation to agregation competitive exam, 20H, Master FE, École Normale Supérieure de Rennes, France
Master : Georges Dumont, Vibrations in Mechanics, 10H, Master FE, École Normale Supérieure de Rennes, France
Master : Georges Dumont, Multibody Dynamics, 9H, Master FE, École Normale Supérieure de Rennes, France
Master : Georges Dumont, Finite Element method, 12H, Master FE, École Normale Supérieure de Rennes, France
Master : Ludovic Hoyet, Motion for Animation and Robotics, 9h, University Rennes 1, France
Master : Ludovic Hoyet, Motion Analysis and Gesture Recognition, 12h, INSA Rennes, France
Master : Anne-Hélène Olivier, "Biostatistiques", 7H, Master 2 APPCM, University Rennes2, France
Master : Anne-Hélène Olivier, "Evaluation fonctionnelle des pathologies motrices", 9H Master 2 APPCM, University Rennes2, France
Master : Anne-Hélène Olivier, "Maladie neurodégénératives : aspects biomécaniques", 2H Master 1 APPCM, University Rennes2, France
Master : Anne-Hélène Olivier, "Biostatistiques", 7H, Master 1 EOPS, University Rennes2, France
Master : Anne-Hélène Olivier, "Méthodologie", 4H, Master 1 EOPS/APPCM, University Rennes2, France
Master : Anne-Hélène Olivier, "Contrôle moteur : Boucle perceptivo-motrice", 3H, Master 1IEAP, Université Rennes 2, France
Master: Antonio Mucherino, “Programmation Parallèle”, M1 en Informatique, 16H, University of Rennes, France
Master: Fabrice Lamarche, "Compilation pour l'image numérique", 29h, Master 1, ESIR, University of Rennes 1, France
Master: Fabrice Lamarche, "Synthèse d'images", 12h, Master 1, ESIR, University of Rennes 1, France
Master: Fabrice Lamarche, "Synthèse d'images avancée", 28h, Master 1, ESIR, University of Rennes 1, France
Master: Fabrice Lamarche, "Modélisation Animation Rendu", 36h, Master 2, ISTIC, University of Rennes 1, France
Master: Fabrice Lamarche, "Jeux vidéo", 26h, Master 2, ESIR, University of Rennes 1, France
Master: Fabrice Lamarche, "Motion for Animation and Robotics", 9h, Master 2 SIF, ISTIC, University of Rennes 1, France.
Master : Armel Crétual, "Méthodologie", leader of the module, 20H, Master 1 M2S, University Rennes2, France
Master : Armel Crétual, "Biostatistiques", leader of the module, 30H, Master 2 M2S, University Rennes2, France
Master : Richard Kulpa, "Boucle analyse-modélisation-simulation du mouvement", 27h, leader of the module, Master 2, Université Rennes 2, France
Master : Richard Kulpa, "Méthodes numériques d'analyse du geste", 27h, leader of the module, Master 2, Université Rennes 2, France
Master : Richard Kulpa, "Cinématique inverse", 3h, leader of the module, Master 2, Université Rennes 2, France
Master: Marc Christie, "Multimedia Mobile", Master 2, leader of the module, 32h, Computer Science, University of Rennes 1, France
Master: Marc Christie, "Projet Industriel Transverse", Master 2, 32h, leader of the module, Computer Science, University of Rennes 1, France
Master: Marc Christie, "Outils pour la Conception d'IHM", Master 2, 32h, leader of the module, Computer Science, University of Rennes 1, France
Licence : Franck Multon, "Ergonomie du poste de travail", Licence STAPS L2 & L3, University Rennes2, France
Licence: Charles Pontonnier, "Numerical control", leader of the module, Electronics, École Inter-Armes de Saint-Cyr Coëtquidan, France
Licence : Anne-Hélène Olivier, "Analyse cinématique du mouvement", 100H , Licence 1, University Rennes 2, France
Licence : Anne-Hélène Olivier, "Anatomie fonctionnelle", 7H , Licence 1, University Rennes 2, France
Licence : Anne-Hélène Olivier, "Effort et efficience", 12H , Licence 2, University Rennes 2, France
Licence : Anne-Hélène Olivier, "Locomotion et handicap", 12H , Licence 3, University Rennes 2, France
Licence : Anne-Hélène Olivier, "Biomécanique spécifique aux APA", 8H , Licence 3, University Rennes 2, France
Licence : Anne-Hélène Olivier, "Biomécanique du viellissement", 12H , Licence 3, University Rennes 2, France
Licence: Antonio Mucherino, “Informatique 1”, 80H, L1, University of Rennes, France
Licence: Fabrice Lamarche, "Initiation à l'algorithmique et à la programmation", 56h, Licence 3, ESIR, University of Rennes 1, France
Licence: Fabrice Lamarche, "Programmation en C++", 46h, Licence 3, ESIR, University of Rennes 1, France
Licence: Fabrice Lamarche, "IMA", 24h, Licence 3, ENS Rennes, ISTIC, University of Rennes 1, France
Licence : Armel Crétual, "Analyse cinématique du mouvement", 100H, Licence 1, University Rennes 2, France
Licence : Richard Kulpa, "Biomécanique (dynamique en translation et rotation)", 48h, Licence 2, Université Rennes 2, France
Licence : Richard Kulpa, "Méthodes numériques d'analyse du geste", 48h, Licence 3, Université Rennes 2, France
Licence : Richard Kulpa, "Statistiques et informatique", 15h, Licence 3, Université Rennes 2, France
Licence : Marc Christie, "Programmation Impérative 1", leader of the module, University of Rennes 1, France
PhD: Antoine Muller, Contributions méthodologiques à l'analyse musculo-squelettique de l'humain dans l'objectif d'un compromis précision performance (Methodological contributions to the musculoskeletal analysis of humans with a trade-off between performance and precision), Ecole normale supérieure, Georges Dumont & Charles Pontonnier
PhD: Marion Morel, Suivi et étude des interactions pour l'analyse des tactiques durant un match de basket-ball, UPMC - University Rennes 2, September 2014, Catherine Achard & Séverine Dubuisson & Richard Kulpa
PhD in progress (beginning September 2017): Simon Hilt, Haptique Biofidèle pour l'Interaction en réalité virtuelle, Ecole normale supérieure, Georges Dumont & Charles Pontonnier
PhD in progress (beginning September 2017): Pierre Puchaud, Développement d'un modèle musculo-squelettique générique du soldat en vue du support de son activité physique, Ecole normale supérieure, Charles Pontonnier & Nicolas Bideau & Georges Dumont
PhD in progress: Rebecca Fribourg, Enhancing Avatars in Virtual Reality through Control, Interactions and Feedback, Sept. 2017, Ferran Argelaguet & Ludovic Hoyet & Anatole Lécuyer
PhD in progress: Florian Berton, Design of a virtual reality platform for studying immersion and behaviours in aggressive crowds, Nov. 2017, Ludovic Hoyet & Anne-Hélène Olivier & Julien Pettré
PhD in progress: Sean D. Lynch, Perception visuelle du mouvement humain dans les interactions lors de tâches locomotrices, M2S - University Rennes 2, September 2015, Anne-Hélène Olivier & Richard Kulpa
PhD in progress: Amaury Louarn, A topology-driven approach to retargeting of filmic style in 3D environments, University of Rennes 1, Oct. 2017, Marc Christie & Fabrice Lamarche & Franck Multon
PhD in progress: Florence Gaillard, Evaluation en situation écologique des capacités fonctionnelles des membres supérieurs d'enfants hémiplégiques, University Rennes 2, December 2015, Armel Crétual & Isabelle Bonan
PhD in progress: Karim Jamal, Les effets des stimulations sensorielles par vibration sur les perturbations posturales secondaires à des troubles de la cognition spatiale après un Accident Vasculaire Cérébral, University Rennes 2, September 2016, Isabelle Bonan & Armel Crétual
PhD in progress: Lyse Leclercq, Intérêt dans les activités physiques du rétablissement de la fonction inertielle des membres supérieurs en cas d'amputation ou d'atrophie, University Rennes 2, September 2017, Armel Crétual
PhD in progress: Pierre Touzard, Longitudinal follow-up of the serve of young elite tennis players: biomechanical identification of performance and injury-risk factors, University Rennes 2, September 2014, Benoit Bideau & Richard Kulpa & Caroline Martin
PhD in progress: Yacine Said Bouhalia, A cross-disciplinary approach for the analysis and recognition of 2D and 3D gestures, INSA of Rennes, September 2015, Richard Kulpa & Franck Multon & Eric Anquetil
PhD in progress: Charles Faure, Cooperative and competitive strategies in multiple physical interaction tasks, Université Rennes 2 - ENS Rennes, September 2016, Benoit Bideau & Richard Kulpa
PhD in progress: Théo Perrin, Assessment of the influence of feedback on learning capacity in complex interactions between players, and of how this influence varies with the sport, Université Rennes 2 - ENS Rennes, September 2017, Benoit Bideau & Richard Kulpa
PhD: Alexandra Pimenta dos Santos, Dynamical synthesis and analysis of healthy and pathological human walking, Université Pierre et Marie Curie, November 2017, Franck Multon, President
PhD: Galo Xavier Maldonado Toro, Analysis and Generation of Highly Dynamic Motions of Anthropomorphic Systems: Application to Parkour, Université Fédérale Toulouse Midi-Pyrénées, November 2017, Franck Multon, Rapporteur
PhD: Channarong Trakunsaranakom, Proposal for tangible, intuitive and collaborative design of manufactured products through virtual and augmented reality environments, Université de Grenoble-Alpes, June 21st, 2017, Georges Dumont, Rapporteur
PhD: Jingtao Chen, Biomechanical analysis of different aspects in virtual reality, Université de Grenoble-Alpes, January 30th, 2017, Georges Dumont, Rapporteur
PhD: Anne-Laure Kervellec, Introducing software into rehabilitation: optimization and assessment of engagement, University Rennes 2, January 5th, 2017, Armel Crétual, Examiner
HDR: Sophie Sakka, Between the Human and the Humanoid, Ecole Centrale de Nantes, December 2017, Franck Multon, President
Ludovic Hoyet, Journées de la Science 2017: presentation and demo "My Six-Fingered Hand: Avatars and Virtual Reality" for primary- and high-school classes.
Ludovic Hoyet, demonstrations of "My Six-Fingered Hand: Avatars and Virtual Reality" for Inria's 50th anniversary in Paris.