MimeTIC is a multidisciplinary team whose aim is to better understand and model human activity in order to simulate realistic autonomous virtual humans: realistic behavior, realistic motions and realistic interactions with other characters and users. This requires modeling the complexity of the human body and of the environment in which people pick up information and on which they act. A specific focus is dedicated to human physical activity and sports, as they raise the strongest constraints and the highest complexity when addressing these problems. Thus, MimeTIC is composed of experts in computer science whose research interests are computer animation, behavioral simulation, motion simulation, crowds and interaction between real and virtual humans. MimeTIC is also composed of experts in sports science, motion analysis, motion sensing, biomechanics and motion control. Hence, the scientific foundations of MimeTIC are motion sciences (biomechanics, motion control, perception-action coupling, motion analysis), computational geometry (modeling of the 3D environment, motion planning, path planning) and the design of protocols in immersive environments (use of virtual reality facilities to analyze human activity).
Thanks to these skills, we wish to reach the following objectives: to make virtual humans behave, move and interact in a natural manner in order to increase immersion and to improve knowledge of human motion control. In real situations (see Figure ), people have to deal with their physiological, biomechanical and neurophysiological capabilities in order to reach a complex goal. Hence MimeTIC addresses the problem of modeling the anatomical, biomechanical and physiological properties of the human being. Moreover, this character has to deal with his environment. Firstly, he has to perceive this environment and pick up relevant information; MimeTIC thus addresses the problem of modeling the environment, including its geometry and associated semantic information. Secondly, he has to act on this environment to reach his goal, which involves cognitive processes, motion planning, joint coordination and force production.
In order to reach the above objectives, MimeTIC has to address three main challenges:
dealing with the intrinsic complexity of the human being, especially when addressing the problem of interactions between people, for which it is impossible to predict and model all the possible states of the system,
making the different components of human activity control (such as the biomechanical and physical, reactive, cognitive, rational and social layers) interact while each of them is modeled with completely different state representations and time samplings,
and being able to measure human activity while dealing with the compromise between ecological and controllable protocols, and being able to extract relevant information from large databases.
Contrary to many classical approaches in computer simulation, which mostly propose simulation without trying to understand how real people behave, the team promotes a coupling between human activity analysis and synthesis, as shown in Figure .
In this research path, improving knowledge of human activity enables us to highlight fundamental assumptions about the natural control of human activities. These contributions can be promoted in, e.g., biomechanics, motion sciences and neurosciences. Based on these assumptions, we propose new algorithms for controlling autonomous virtual humans. The virtual humans can perceive their environment and decide on the most natural action to reach a given goal. This work is promoted in computer animation and virtual reality, and has some applications in robotics through collaborations. Once autonomous virtual humans have the ability to act as real humans would in the same situation, it is possible to make them interact with other autonomous characters (for crowd or group simulations) and with real users. The key idea here is to analyze to what extent the assumptions proposed at the first stage lead to natural interactions with real users. This process enables the validation of both our assumptions and our models.
Among all the problems and challenges described above, MimeTIC focuses on the following domains of research:
motion sensing, which is a key issue for extracting information from raw motion capture data and thus for proposing assumptions on how people control their activity,
human activity & virtual reality, which is explored through sports applications in MimeTIC. This domain enables the design of new methods for analyzing the perception-action coupling in human activity, and allows validating whether autonomous characters lead to natural interactions with users,
crowd and group simulation, which is dedicated to modeling the interactions in small groups of individuals and to studying how to extend them to larger groups, such as crowds with a lot of individual variability,
virtual storytelling, which enables us to design and simulate complex scenarios involving several humans who have to satisfy numerous complex constraints (such as adapting to the real-time environment in order to play an imposed scenario), and to design the coupling with the camera scenario to provide the user with a real cinematographic experience,
biomechanics, which is essential to offer autonomous virtual humans who can react to physical constraints in order to reach high-level goals, such as maintaining balance in dynamic situations or selecting a natural motor behavior within the theoretical solution space for a given task,
and autonomous characters, which is a transversal domain that can reuse the results of all the other domains to make these heterogeneous assumptions and models provide the character with natural behaviors and autonomy.
Human motion control is a very complex phenomenon that involves several layered systems, as shown in Figure . Each layer of this controller is responsible for dealing with perceptual stimuli in order to decide the actions that should be applied to the human body and its environment. Due to the intrinsic complexity of the information (internal representation of the body and mental state, external representation of the environment) used to perform this task, it is almost impossible to model all the possible states of the system. Even for simple problems, there generally exists an infinity of solutions. For example, from the biomechanical point of view, there are many more actuators (i.e. muscles) than degrees of freedom, leading to an infinity of muscle activation patterns for a unique joint rotation. From the reactive point of view, there exists an infinity of paths to avoid a given obstacle in navigation tasks. At each layer, the key problem is to understand how people select one solution among these infinite solution spaces. Several scientific domains have addressed this problem from specific points of view, such as physiology, biomechanics, neurosciences and psychology.
In biomechanics and physiology, researchers have proposed hypotheses based on accurate joint modeling (to identify the real anatomical rotational axes), energy minimization, force and torque minimization, comfort maximization (i.e. avoiding joint limits), and physiological limitations in muscle force production. All these constraints have been used in optimal controllers to simulate natural motions. The main problem is thus to define how these constraints are composed, such as finding the weights used to linearly combine these criteria in order to generate a natural motion. Musculoskeletal models are typical examples for which an infinity of muscle activation patterns exists, especially when dealing with antagonist muscles. An unresolved problem is to define how to use the above criteria to retrieve the actual activation patterns, while optimization approaches still lead to unrealistic ones. It is an open problem that will require multidisciplinary skills including computer simulation, constraint solving, biomechanics, optimal control, physiology and neurosciences.
In neuroscience, researchers have proposed other theories, such as coordination patterns between joints driven by simplifications of the variables used to control the motion. The key idea is to assume that instead of controlling all the degrees of freedom, people control higher-level variables which correspond to combinations of joint angles. In walking, data reduction techniques such as Principal Component Analysis have shown that lower-limb joint angles are generally projected on a unique plane whose angle in the state space is associated with energy expenditure. Although such knowledge exists for specific motions, such as locomotion or grasping, this type of approach is still difficult to generalize. The key problem is that many variables are coupled and it is very difficult to objectively study the behavior of a unique variable across various motor tasks. Computer simulation is a promising method to evaluate this type of assumption, as it enables accurate control of all the variables and checking whether they lead to natural movements.
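As a toy illustration of this data-reduction idea, the sketch below tests the planar covariation of three lower-limb elevation angles with a small PCA. The input layout, angle choice and synthetic data are assumptions made for illustration, not the protocol of the cited studies.

```python
import numpy as np

def planar_covariation(angles):
    """angles: (n_samples, 3) time series of hip, knee and ankle
    elevation angles over one or more gait cycles (degrees)."""
    centered = angles - angles.mean(axis=0)
    cov = np.cov(centered, rowvar=False)        # 3x3 covariance
    eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalues
    explained = eigvals[::-1] / eigvals.sum()   # descending variance ratios
    # If the first two components explain most of the variance, the
    # angles covary close to a single plane, whose normal is the
    # eigenvector associated with the smallest eigenvalue.
    plane_normal = eigvecs[:, 0]
    return explained, plane_normal

# Synthetic, near-planar example data:
t = np.linspace(0, 2 * np.pi, 200)
hip, knee = 20 * np.sin(t), 40 * np.sin(t - 0.6)
ankle = 0.5 * hip - 0.3 * knee + np.random.normal(0, 1, t.size)
explained, normal = planar_covariation(np.column_stack([hip, knee, ankle]))
print("variance explained by the first two PCs:", explained[:2].sum())
```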
Neurosciences also address the problem of coupling perception and action by providing control laws based on visual cues (or any other senses), such as determining how the optical flow is used to control direction in navigation tasks while dealing with collision avoidance or interception. The coupling of the control variables is even stronger in this case, as the state of the body is enriched by the large amount of external information that the subject can use. Virtual environments inhabited by autonomous characters whose behavior is driven by motion control assumptions are a promising approach to this problem. For example, an interesting problem in this field is navigation in an environment inhabited by other people. Typically, avoiding static obstacles together with other people moving in the environment is a combinatorial problem that strongly relies on the coupling between perception and action.
One of the main objectives of MimeTIC is to enhance knowledge of human motion control by developing innovative experiments based on computer simulation and immersive environments. To this end, designing experimental protocols is a key point, and some of the researchers in MimeTIC have developed this skill in biomechanics and perception-action coupling. Associating these researchers with experts in virtual human simulation, computational geometry and constraint solving enables us to contribute to fundamental knowledge in human motion control.
Understanding interaction between humans is very challenging because it involves many complex phenomena including perception, decision-making, cognition and social behaviors. Moreover, all these phenomena are difficult to isolate in real situations, so it is very complex to understand the influence of each of them on the interaction. It is thus necessary to find an alternative solution that can standardize the experiments and that allows the modification of only one parameter at a time. Video was first used, since the displayed experiment is perfectly repeatable, and cut-offs (stopping the video at a specific time before its end) give access to temporal information. Nevertheless, the absence of an adapted viewpoint and of stereoscopic vision does not provide depth information, which is very meaningful. Moreover, during a video recording session, the real human is acting in front of a camera and not an opponent; the interaction is then not a real interaction between humans.
Virtual Reality (VR) systems allow full standardization of the experimental situations and complete control of the virtual environment. It is then possible to modify only one parameter at a time and observe its influence on the perception of the immersed subject. VR can then be used to understand what information is picked up to make a decision. Moreover, cut-offs can also be used to obtain temporal information about when this information is picked up. When the subject can moreover react as in a real situation, his movement (captured in real time) provides information about his reactions to the modified parameter. Not only is perception studied, but the complete perception-action loop: perception and action are indeed coupled and influence each other, as suggested by Gibson in 1979.
Finally, VR allows the validation of virtual human models. Some models are indeed based on the interaction between the virtual character and other humans, such as a walking model. In that case, there are two ways to validate it. First, it can be compared to real data (e.g. real trajectories of pedestrians), but such data are not always available and are difficult to obtain. The alternative solution is then to use VR: the realism of the model is validated by immersing a real subject in a virtual environment in which a virtual character is controlled by the model. Its evaluation is then deduced from how the immersed subject reacts when interacting with the model and how realistic he feels the virtual character is.
Computational geometry is a branch of computer science devoted to the study of algorithms which can be stated in terms of geometry. It aims at studying algorithms for combinatorial, topological and metric problems concerning sets of points in Euclidean spaces. Combinatorial computational geometry focuses on three main problem classes: static problems, geometric query problems and dynamic problems.
In static problems, some input is given and the corresponding output needs to be constructed or found. Such problems include linear programming, Delaunay triangulations and Euclidean shortest paths, for instance. In geometric query problems, commonly known as geometric search problems, the input consists of two parts: the search space part and the query part, which varies over the problem instances. The search space typically needs to be preprocessed in a way that allows multiple queries to be answered efficiently. Typical problems are range searching, point location in a partitioned space and nearest neighbor queries, for instance. In dynamic problems, the goal is to find an efficient algorithm for finding a solution repeatedly after each incremental modification of the input data (addition, deletion or motion of input geometric elements). Algorithms for problems of this type typically involve dynamic data structures. Both of the previous problem types can be converted into a dynamic problem, for instance maintaining a Delaunay triangulation between moving points.
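As a minimal illustration of the dynamic problem mentioned above, the sketch below maintains a Delaunay triangulation of moving points. SciPy does not update triangulations under motion, so this naive version simply rebuilds the structure at every step; it is a didactic sketch, not an efficient kinetic data structure.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
points = rng.random((50, 2))            # 50 points in the unit square
velocities = rng.normal(0, 0.01, (50, 2))

for step in range(10):
    points = np.clip(points + velocities, 0.0, 1.0)  # move the points
    tri = Delaunay(points)              # full rebuild at each step
    print(f"step {step}: {len(tri.simplices)} triangles")
```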
The MimeTIC team works on problems such as crowd simulation, spatial analysis, path and motion planning in static and dynamic environments, and camera planning with visibility constraints, for instance. The core of those problems, by nature, relies on problems and techniques belonging to computational geometry. The proposed models pay attention to algorithmic complexity in order to be compatible with the performance constraints imposed by interactive applications.
Autonomous characters are becoming more and more popular as they are used in an increasing number of application domains. In the field of special effects, virtual characters are used to replace secondary actors and generate highly populated scenes that would be hard and costly to produce with real actors. In video games and virtual storytelling, autonomous characters play the role of actors that are driven by a scenario. Their autonomy allows them to react to unpredictable user interactions and adapt their behavior accordingly. In the field of simulation, autonomous characters are used to simulate the behavior of humans in different kinds of situations; they enable the study of new situations and their possible outcomes.
One of the main challenges in the field of autonomous characters is to provide a unified architecture for the modeling of their behavior. This architecture includes perception, action and decisional parts. The decisional part needs to mix different kinds of models, acting at different time scales and working with data of different natures, ranging from numerical (motion control, reactive behaviors) to symbolic (goal-oriented behaviors, reasoning about actions and changes).
In the MimeTIC team, we focus on autonomous virtual humans. Our problem is not to reproduce human intelligence but to propose an architecture making it possible to model credible behaviors of anthropomorphic virtual actors evolving and moving in real time in virtual worlds. The latter can represent particular situations studied by behavioral psychologists or correspond to an imaginary universe described by a scenario writer. The proposed architecture should mimic all the human intellectual and physical functions.
Biomechanics is obviously a very large domain. It can be divided according to the scale at which the analysis is performed, going from microscopic evaluation of the mechanical properties of biological tissues to macroscopic analysis and modeling of whole-body motion. Our topics in the domain of biomechanics mainly lie within this last scope.
The first goal of this kind of research project is a better understanding of human motion. The MimeTIC team addresses three different situations: everyday motions of average subjects, locomotion of pathological subjects and sports gestures.
In the first set, MimeTIC is interested in studying how subjects maintain their balance in highly dynamic conditions. Until now, balance has nearly always been considered in static or quasi-static conditions, and knowledge about much more dynamic cases still has to be improved. Our work has shown that, first of all, the question of which parameter best characterizes such dynamic balance is still open. We have also taken an interest in collision avoidance between two pedestrians. This topic includes the search for the parameters that are interactively controlled and the study of each one's role within this interaction.
When patients, in particular those suffering from central nervous system disorders, cannot walk efficiently, it becomes very useful for practitioners to benefit from an objective evaluation of their capacities. To support such patient follow-up, we have developed two complementary indices, one based on kinematics and the other one on muscle activations. One major point of our research is that such indices are usually only developed for children, whereas adults with these disorders are much more numerous.
Finally, in sports, where the gesture can be considered, in some way, as abnormal, the goal is more precisely to understand the determinants of performance. This could then be used to improve training programs or devices. Two different sports have been studied: the tennis serve, where the goal was to understand the contribution of each segment of the body to ball speed, and fin swimming, where we studied the influence of the mechanical characteristics of the fin.
After having improved the knowledge of these different gestures, a second goal is to propose modeling solutions that can be used in VR environments for other research topics within MimeTIC. This has been the case, for example, for collision avoidance.
Crowd simulation is a very active and competitive domain. Various disciplines are interested in crowd modeling and simulation: mathematics, cognitive sciences, physics, computer graphics, etc. The reason for this broad interest is that crowd simulation raises fascinating challenges.
First, a crowd can be seen as a complex system: numerous local interactions occur between its elements and result in macroscopic emergent phenomena. Interactions are of various natures and are influenced by various factors as well. Physical factors are crucial, as a crowd by definition gathers numerous moving people at a certain level of density. But sociological, cultural and psychological factors are important as well, since crowd behavior changes deeply from country to country, or depending on the considered situation. From the computational point of view, crowds push traditional simulation algorithms to their limits. Since any element of a crowd may interact with any other element of the same crowd, a naive simulation algorithm has quadratic complexity. Specific strategies are devised to face this difficulty: level-of-detail techniques enable large crowd simulations to scale and to reach real-time performance.
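A minimal sketch of one classical strategy for avoiding this quadratic cost: bin agents into a uniform grid whose cell size equals the interaction radius, and only test agents in neighboring cells. Names and structure are illustrative, not taken from MimeTIC's simulators.

```python
from collections import defaultdict

def neighbors_within(positions, radius):
    """Return all agent pairs closer than `radius`, testing only
    agents that fall in the same or adjacent grid cells."""
    grid = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        grid[(int(x // radius), int(y // radius))].append(i)
    pairs = []
    for (cx, cy), agents in grid.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in grid.get((cx + dx, cy + dy), ()):
                    for i in agents:
                        if i < j:  # count each pair once
                            xi, yi = positions[i]
                            xj, yj = positions[j]
                            if (xi - xj) ** 2 + (yi - yj) ** 2 <= radius ** 2:
                                pairs.append((i, j))
    return pairs

print(neighbors_within([(0.1, 0.1), (0.3, 0.2), (5.0, 5.0)], 0.5))
```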
MimeTIC is an international key contributor in the domain of crowd simulation. Our approach is specific and based on three axes. First, our modeling approach is founded on human movement science: we conducted challenging experiments on the motion of groups. Second, we developed high-performance solutions for crowd simulation. Third, we develop solutions for realistic navigation in virtual worlds to enable interaction with crowds in Virtual Reality.
Recording human activity is a key point of many applications and fundamental works. Numerous sensors and systems have been proposed to measure positions, angles or accelerations of the user's body parts. Whatever the system, one of the main challenges is to be able to automatically recognize and analyze the user's performance from poor and noisy signals. Human activity and motion are subject to variability: intra-variability due to space and time variations of a given motion, but also inter-variability due to different styles and anthropometric dimensions. MimeTIC has addressed the above problems in two main directions.
Firstly, we have studied how to recognize and quantify motions performed by a user when using accurate systems such as Vicon (product of Oxford Metrics) or OptiTrack (product of NaturalPoint) motion capture systems. These systems provide large vectors of accurate information. Due to the size of the state vector (all the degrees of freedom), the challenge is to find the compact information (named features) that enables the automatic system to recognize the user's performance. Whatever method is used, finding relevant features that are not sensitive to intra-individual and inter-individual variability is a challenge. Some researchers have proposed to manually edit these features (such as a Boolean value stating whether the arm is moving forward or backward), so that the expertise of the designer is directly linked with the success ratio. Many generic features have been proposed, such as using Laban notation, which was introduced to encode dancing motions. Other approaches tend to use machine learning to automatically extract these features. However, most of the proposed approaches were used to search a database for motions whose properties correspond to the features of the user's performance (named motion retrieval approaches). This does not ensure the retrieval of the exact performance of the user but of a set of motions with similar properties.
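The sketch below illustrates the feature-based motion retrieval principle described above: each motion is reduced to a compact descriptor, and the database is searched by nearest neighbor. The specific features (per-DOF mean, standard deviation and dominant frequency) are arbitrary choices for illustration, not those used in the cited approaches.

```python
import numpy as np

def features(joint_angles):
    """joint_angles: (n_frames, n_dofs) array. Returns a compact
    descriptor: per-DOF mean, standard deviation and the index of the
    dominant frequency (DC component excluded)."""
    mean = joint_angles.mean(axis=0)
    std = joint_angles.std(axis=0)
    spectrum = np.abs(np.fft.rfft(joint_angles - mean, axis=0))
    dom_freq = spectrum[1:].argmax(axis=0).astype(float)
    return np.concatenate([mean, std, dom_freq])

def retrieve(query, database):
    """Index of the database motion whose features are closest to the query."""
    q = features(query)
    return int(np.argmin([np.linalg.norm(q - features(m)) for m in database]))

# Usage with random stand-in motions: a slightly perturbed copy of
# motion 2 should be retrieved as motion 2.
rng = np.random.default_rng(1)
db = [rng.normal(size=(120, 8)) for _ in range(5)]
print("closest motion:", retrieve(db[2] + 0.01 * rng.normal(size=(120, 8)), db))
```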
Secondly, we wish to find alternatives to the above approach, which is based on analyzing accurate and complete knowledge of joint angles and positions. New sensors, such as depth cameras (Kinect, product of Microsoft), provide us with very noisy joint information but also with the surface of the user. Classical approaches would try to fit a skeleton into the surface in order to compute joint angles which, again, leads to large state vectors. An alternative is to extract relevant information directly from the raw data, such as the surface provided by depth cameras. The key problem is that the nature of these data may be very different from classical representations of human performance. In MimeTIC, we try to address this problem in specific application domains that require picking up specific information, such as gait asymmetry or regularity for the clinical analysis of human walking.
Sport is characterized by complex displacements and motions. These motions depend on the visual information that the athlete can pick up in his environment, including the opponent's actions. Perception is thus fundamental to performance. Indeed, a sports action, being unique, complex and often limited in time, requires a selective gathering of information. Perception is often seen as a prerequisite for action, taking the role of a passive collector of information. However, as mentioned by Gibson in 1979, the perception-action relationship should not be considered sequential but rather as a coupling: we perceive to act, but we must act to perceive. There would thus be laws of coupling between the informational variables available in the environment and the motor responses of a subject. In other words, athletes have the ability to perceive opportunities of action directly from the environment. Whichever school of thought is considered, VR offers new perspectives to address these concepts by complementing them with real-time motion capture of the immersed athlete.
In addition to allowing a better understanding of sports and of the interaction between athletes, VR can also be used as a training environment, as it can provide complementary tools to coaches. It is indeed possible to add visual or auditory information to better train an athlete. The knowledge gained in perceptual experiments can for example be used to highlight the body parts that are important to look at in order to correctly anticipate the opponent's action.
Interactive digital storytelling, including novel forms of edutainment and serious games, provides access to social and human themes through stories which can take various forms, and offers opportunities for massively enhancing the possibilities of interactive entertainment, computer games and digital applications. It provides chances for redefining the experience of narrative through interactive simulations of computer-generated story worlds and opens many challenging questions at the overlap between computational narratives, autonomous behaviours, interactive control, content generation and authoring tools.
Of particular interest for the MimeTIC research team, virtual storytelling raises challenging opportunities in providing effective models for enforcing autonomous behaviours for characters in complex 3D environments. Offering characters both low-level capacities, such as perceiving the environment, interacting with it and reacting to changes in its topology, and, built upon these, higher levels, such as modelling abstract representations for efficient reasoning, planning paths and activities, and modelling cognitive states and behaviours, requires the provision of expressive, multi-level and efficient computational models. Furthermore, virtual storytelling requires the seamless control of the balance between the autonomy of characters and the unfolding of the story through the narrative discourse. Virtual storytelling also raises challenging questions on the conveyance of a narrative through interactive or automated control of the cinematography (how to stage the characters, the lights and the cameras). For example, estimating the visibility of key subjects, or performing motion planning for cameras and lights, are central issues which have not received satisfactory answers in the literature.
The design of workstations nowadays tends to include assessment steps in a Virtual Environment (VE) to evaluate ergonomic features. This approach is more cost-effective and convenient, since working directly on the Digital Mock-Up (DMU) in a VE is preferable to constructing a real physical mock-up in a Real Environment (RE). This is substantiated by the fact that a Virtual Reality (VR) set-up can be easily modified, enabling quick adjustments of the workstation design. Indeed, the aim of integrating ergonomics evaluation tools in VE is to facilitate the design process, enhance design efficiency and reduce costs.
The development of such platforms calls for several improvements in the fields of motion analysis and VR: the interactions have to be as faithful as possible to properly mimic the motions performed in real environments, the fidelity of the simulator also needs to be correctly evaluated, and motion analysis tools have to be able to provide, in real time, biomechanical quantities usable by ergonomists to analyze and improve working conditions.
Populate is a toolkit dedicated to task scheduling under time and space constraints in the field of behavioral animation. It is currently used to populate virtual cities with pedestrians performing different kinds of activities involving travel between different locations. However, the generic nature of the algorithm and of the underlying representations enables its use in a wide range of applications that need to link activity, time and space. The main scheduling algorithm relies on the following inputs: an informed environment description, an activity an agent needs to perform, and the individual characteristics of this agent. The algorithm produces a valid task schedule compatible with the time and spatial constraints imposed by the activity description and the environment. In this task schedule, time intervals relating to travel and task fulfillment are identified, and the locations where tasks should be performed are automatically selected.
The software provides the following functionalities:
A high-level XML dialect dedicated to the description of agent activities in terms of tasks and sub-activities that can be combined with different kinds of operators: sequential, without order, interlaced, etc. This dialect also enables the description of time and location constraints associated with tasks.
An XML dialect that enables the description of an agent's personal characteristics.
An informed graph describing the topology of the environment as well as the locations where tasks can be performed. A bridge between TopoPlan and Populate has also been designed: it provides an automatic analysis of an informed 3D environment, used to generate an informed graph compatible with Populate.
The generation of a valid task schedule based on the previously mentioned descriptions.
With a good configuration of agent characteristics (based on statistics), we demonstrated that the task schedules produced by Populate are representative of human ones. In conjunction with TopoPlan, it has been used to populate a district of Paris as well as imaginary cities with several thousand pedestrians navigating in real time.
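As a deliberately simplified sketch of the kind of scheduling Populate performs (this is not Populate's actual API or algorithm), the following greedy scheduler places tasks on a timeline, choosing for each task the nearest candidate location and inserting the travel time before it; Populate additionally handles time windows and richer activity operators.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    duration: float   # minutes
    locations: list   # candidate (x, y) positions where the task can be done

def schedule(tasks, start_pos, speed=80.0):
    """Greedy plan: returns one ((start, end), location) entry per task;
    speed is walking speed in metres per minute."""
    t, pos, plan = 0.0, start_pos, []
    for task in tasks:
        # Pick the closest location able to host the task.
        loc = min(task.locations,
                  key=lambda p: (p[0] - pos[0]) ** 2 + (p[1] - pos[1]) ** 2)
        travel = ((loc[0] - pos[0]) ** 2 + (loc[1] - pos[1]) ** 2) ** 0.5 / speed
        plan.append(((t + travel, t + travel + task.duration), loc))
        t, pos = t + travel + task.duration, loc
    return plan

tasks = [Task("buy bread", 5.0, [(100.0, 0.0), (400.0, 300.0)]),
         Task("post letter", 2.0, [(150.0, 50.0)])]
print(schedule(tasks, start_pos=(0.0, 0.0)))
```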
In our previous biomechanical analysis of the tennis serve, we demonstrated that the energy flow is a pathomechanical factor, meaning that it can increase joint constraints (and thus the risk of injury) without increasing performance. Nevertheless, the definition and evaluation of energy flow is still a complex scientific challenge. We proposed to compare the energy flow during the serve between injured and non-injured tennis players by investigating the relationships between the quality and magnitude of energy flow, the ball velocity and the peaks of upper-limb joint kinetics. The results showed that ball velocity increased and upper-limb joint kinetics decreased with the quality of energy flow from the trunk to the 'hand+racket'. Injured players showed a lower quality of energy flow through the upper-limb kinetic chain, a lower ball velocity and higher rates of energy absorbed by the shoulder and the elbow than non-injured players. These findings imply that an effective energy flow through the kinetic chain, obtained by using a proper serve technique, is necessary for reducing overuse joint injury risks.
Crowds for entertainment or safety applications are most of the time simulated using microscopic algorithms. In contrast with other types of approaches, microscopic simulators are able to generate continuous and smooth trajectories for individual agents. They are based on models of local interactions between agents: the crowd motion results from the combination of all local motions and interactions. The fact that the resulting crowd motion is emergent makes it difficult to anticipate the simulation results. Many motion and interaction models have been proposed, leading to a plethora of simulation algorithms: force-based models, rule-based models, coupled or not with flow-based models, etc. Each type of interaction model actually results in specific crowd motions as well as agent trajectories. Unfortunately, not all have the desired properties: oscillations, jerky trajectories, residual collisions or deadlocks are often observed in simulations. From this point of view, the course presents the many recent advances in crowd simulation algorithms since the introduction of velocity-based algorithms, as well as their impact on the level of realism and the visual quality of simulated crowd motions. It also presents their impact on various kinds of applications.
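A minimal sketch in the spirit of velocity-based models: an agent samples candidate velocities and keeps the one closest to its preferred velocity whose predicted time to collision with every neighbor stays above a horizon. This illustrates the principle only; actual velocity-obstacle formulations are analytical rather than sampled.

```python
import numpy as np

def time_to_collision(p, v, q, u, radius):
    """First time at which two discs of combined radius `radius`
    collide, given positions p, q and velocities v, u."""
    dp, dv = q - p, u - v
    a, b = dv @ dv, 2 * (dp @ dv)
    c = dp @ dp - radius ** 2
    disc = b * b - 4 * a * c
    if a < 1e-9 or disc < 0:
        return np.inf                      # never collide
    t = (-b - np.sqrt(disc)) / (2 * a)
    return t if t >= 0 else np.inf

def choose_velocity(p, v_pref, neighbors, radius=0.6, horizon=3.0):
    """Sample candidate velocities; keep the safe one closest to v_pref."""
    best, best_cost = v_pref, np.inf
    for ang in np.linspace(0, 2 * np.pi, 36, endpoint=False):
        for speed in (0.5, 1.0, 1.5):
            cand = speed * np.array([np.cos(ang), np.sin(ang)])
            ttc = min((time_to_collision(p, cand, q, u, radius)
                       for q, u in neighbors), default=np.inf)
            if ttc < horizon:
                continue                   # would collide too soon
            cost = np.linalg.norm(cand - v_pref)
            if cost < best_cost:
                best, best_cost = cand, cost
    return best

# Head-on encounter: the agent deviates instead of walking straight.
p, v_pref = np.array([0.0, 0.0]), np.array([1.0, 0.0])
neighbors = [(np.array([3.0, 0.0]), np.array([-1.0, 0.0]))]
print(choose_velocity(p, v_pref, neighbors))
```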
Previous works in MimeTIC have shown the advantage of using VR to design and carry out experiments on perception-action coupling in sports, especially for duels between two opponents. However, the impact of using various technical solutions to carry out this type of experiment in sports is not clear. Indeed, immersion is performed by using interfaces to capture the motion/intention of the user and to deliver various multi-sensory feedbacks. These interfaces may affect the perception-action loop, so that results obtained in VR cannot be systematically transferred to real practice.
Most VR applications provide the user with visual feedback in which the avatar of the user can be more or less simplified (sometimes limited to a hand or the tools he is carrying). In first-person view in CAVEs, the user generally does not need an accurate avatar, as he can perceive his real body, but some authors have shown that the perception of distances is generally modified. Some authors have also demonstrated that first-person view was less efficient than third-person view with avatars when performing accurate tasks, such as reaching objects in constrained environments. We proposed an experiment to evaluate which type of feedback is the most appropriate for complex precision tasks, such as the basketball free throw, where the user has to throw a ball into a small basket placed over 4.5 m away from him. Perception of distance is thus a key point in such a task. Beginners and experts first carried out an experiment in a real environment in order to measure their motion and performance in a real situation. Then beginners were asked to perform free throws with a real ball in their hands, but in three conditions in a CAVE (Immersia): 1) first-person view (see Figure ), 2) third-person view with visual feedback of the ball's position, and 3) third-person view with the virtual ball and additional rings modeling the perfect trajectory for the ball to reach the basket. Results show that significant differences in ball speed exist between the first-person view condition and the real condition, whereas no difference exists in the third-person view conditions. If we focus on successful throws only, the ball speed in condition 3) was very similar to the real condition, whereas the other VR conditions (1) and 2)) led to significant differences compared to the real situation. In all VR conditions, the height of ball release was significantly higher than in the real situation. These results show that VR conditions lead to adaptations in the way people perform such a precision task, especially for ball speed and height of ball release. However, this difference is significantly higher with first-person view and tends to zero in condition 3). Future works will evaluate new conditions with avatars and complementary points of view (such as lateral and frontal views together, as suggested by some authors). It will also be important to better understand the problem of perception of distances in such an environment. This work has been performed in cooperation with the University of Brasov in Romania. This paper received the best paper award of the ACM VRST 2014 conference.
Another key feedback is the external forces associated with the task. In most sports applications, such forces are strongly linked to performance. However, delivering these forces in virtual environments is still a challenge, as it requires haptic devices that could affect the way users perform the task (with a different grip compared to the real situation and limitations in the dynamic response of the device). Pseudo-haptics was introduced in the early 2000s. It consists in using visual feedback to make people perceive the forces linked to a task. However, this approach had not been tested for whole-body interaction. In collaboration with the Hybrid team at Inria Rennes, we studied how the visual animation of a self-avatar could be artificially modified in real time in order to generate different haptic perceptions. In our experimental setup, participants could watch their self-avatar in a virtual environment in mirror mode. They could map their gestures onto the self-animated avatar in real time using a Kinect. The experimental task consisted in lifting virtual dumbbells that participants could manipulate by means of a tangible stick. We introduced three kinds of modifications of the visual animation of the self-avatar: 1) an amplification (or reduction) of the user's motion (change in C/D ratio), 2) a change in the dynamic profile of the motion (temporal animation), or 3) a change in the posture of the avatar (angle of inclination). An example is depicted in Figure . Thus, to simulate the lifting of a 'heavy' dumbbell, the avatar animation was distorted in real time using an amplification of the user's motion, slower dynamics, and a larger angle of inclination of the avatar. We evaluated the potential of each technique using an ordering task with four different virtual weights. Our results show that the ordering task could be well achieved with every technique. The C/D ratio-based technique was found to be the most efficient, but participants globally appreciated all the different visual effects, and the best results were observed in the combined configuration. Our results pave the way to the exploitation of such novel techniques in various VR applications such as sports training, exercise games, or industrial training scenarios in single or collaborative mode.
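A toy sketch of the C/D-ratio distortion described above: the avatar's displacement is a scaled version of the user's captured motion, with the scale shrinking as the simulated dumbbell gets heavier. The mapping and constants are illustrative assumptions, not the values used in the experiment.

```python
def avatar_displacement(user_dy, virtual_mass, reference_mass=2.0):
    """user_dy: captured vertical hand displacement (m) for one frame.
    Returns the displacement applied to the avatar's hand: heavier
    virtual weights give a smaller control/display ratio, so the
    avatar's arm seems harder to lift (constants are illustrative)."""
    cd_ratio = min(1.0, reference_mass / virtual_mass)
    return cd_ratio * user_dy

for mass in (1.0, 2.0, 4.0, 8.0):
    print(mass, "kg ->", round(avatar_displacement(0.10, mass), 3), "m")
```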
Sensing human activity is a very active field of research, with a wide range of applications ranging from entertainment and serious games to ambient assisted living, including rehabilitation. MimeTIC aims at proposing original methods to process raw motion capture data in order to compute relevant information according to the application.
In rehabilitation, we have collaborated with the Sainte-Justine Hospital of the University of Montreal, whose main activity is the rehabilitation of children with pathologies of the pyramidal control system. In this domain, defining metrics and relevant measurements to diagnose pathologies and to monitor patients during treatment is a key point. In gait, most previous works focus on spatio-temporal gait parameters (such as step length, frequency, stride duration, global speed...) which can be measured with two main families of systems: 1) one-point measurement with a force plate, one accelerometer or dedicated devices (such as a GAITRite), or 2) multi-point measurement systems with motion sensors or markers placed on the patient's skin. The former provides the clinician with compact but incomplete knowledge, whereas the latter provides him with numerous data which are sometimes difficult to analyze and to acquire (specific technical skills are required). The first step of any type of analysis is to detect the main gait events, such as foot strikes and toe-offs. In treadmill walking, widely used in rehabilitation as it enables the clinician to analyze numerous gait cycles in a limited space with a controlled speed, automatically detecting such gait events requires complex devices and specific technical skills (such as calibration and post-processing with motion capture systems).
Recent papers have demonstrated that low-cost and easy-to-use depth cameras (such as Microsoft's Kinect) look promising for serious applications requiring motion capture. However, some confusion exists between the feet and the ground at foot strike and foot-off, leading to bad estimation of the gait cycle events. We have proposed an alternative approach that consists in using the strong correlation between knee and foot trajectories to deduce foot strikes from knee movements. Indeed, the maximal distance between the knees along the longitudinal axis provides very accurate gait event detection compared to previous works. We have validated this event detection on walking patterns that were also altered by placing a 5 cm sole below the left (resp. right) foot of the subject to create asymmetry. The results show that this gait cycle event detection based on depth images is as accurate as reference methods based on accurate motion capture systems.
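The sketch below illustrates the detection principle described above: on a treadmill, the longitudinal distance between the two knees oscillates once per stride, and its extrema coincide with foot strikes. Signal names, sampling rate and the minimum-interval constant are assumptions for illustration.

```python
import numpy as np
from scipy.signal import find_peaks

def detect_foot_strikes(left_knee_y, right_knee_y, fs=30.0):
    """left/right_knee_y: longitudinal (walking-direction) knee
    positions from the depth camera, in metres, sampled at fs Hz.
    Returns foot-strike times in seconds for each side."""
    d = left_knee_y - right_knee_y
    # A maximum of d means the left knee is maximally ahead (left
    # strike); a minimum means the right knee is (right strike).
    left, _ = find_peaks(d, distance=int(0.4 * fs))
    right, _ = find_peaks(-d, distance=int(0.4 * fs))
    return left / fs, right / fs

# Synthetic gait at 0.9 strides per second:
t = np.arange(0, 10, 1 / 30.0)
left_knee = 0.3 * np.sin(2 * np.pi * 0.9 * t)
print(detect_foot_strikes(left_knee, -left_knee)[0][:3])
```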
The use of virtual reality tools for ergonomics applications is a very important challenge in order to generalize the use of such devices for the design of workstations.
We proposed, in collaboration with Thierry Duval (Lab-STICC, Télécom Bretagne, Brest), a new architecture for information sharing and bridging in collaborative virtual environments, applied to ergonomics studies. This year, we particularly presented how we implemented the design engineer role in the collaborative environment. We are currently evaluating the complete framework for collaborative ergonomics by defining use cases and trying to find the best design mode to efficiently solve this problem. Moreover, we have developed and evaluated some manipulation techniques, such as the 7-handle technique, which is particularly efficient for manipulating large objects in immersive environments. A demonstration of this technique was presented during the ICAT-EGVE conference.
We also contributed to the field of on-site motion analysis. The Microsoft Kinect is a promising tool to evaluate human poses without markers, calibration or manual post-processing. It has been applied to a wide set of applications, such as entertainment, rehabilitation, sports analysis and, more recently, ergonomics. In MimeTIC, we wish to develop innovative approaches based on a Kinect in order to assess the potential risks of musculoskeletal disorders. However, analyzing humans in workplaces is challenging because of many potential occlusions and displacements of the user. Hence, it is a key point to evaluate to what extent this method can be applied to real workplaces, in real work conditions. Most previous works aiming at evaluating the Kinect sensor generally focus on simple 2D poses. In this work, we proposed to evaluate the reliability of Kinect measurements for assessing the movement of operators in ergonomic studies with complex 3D upper-limb poses. To this end, we asked subjects to perform complex 3D arm motions concurrently measured with a Kinect and a Vicon motion capture system. The results demonstrated that most of the poses were correctly estimated with the Kinect, but specific poses were badly reconstructed, leading to errors of up to 30°. Hence, experimenters should take this information into account when using a Kinect in a workplace, in order to avoid these badly reconstructed poses.
Finally, we proposed a new approach for the use of virtual reality with haptics in the Product Development Process loop for testing deformable parts, by introducing the user in the loop and proposing a two-stage deformation simulation method for real-time haptic interaction. Such an approach is important to let the designer handle and validate the design of a product or a workstation respecting multiple constraints, e.g. ergonomics, bulk or productivity. This approach has been fully detailed in a book chapter published this year.
A common issue in three-dimensional animation is the creation of contacts between a virtual creature and its environment. Contacts allow force exertion, which produces motion. This work addresses the problem of computing contact configurations for performing motion tasks such as getting up from a sofa, pushing an object or climbing. We propose a two-step method to generate contact configurations suitable for such tasks. The first step is an offline sampling of the range of motion (ROM) of a virtual creature; the ROM of the human arms and legs is precisely determined experimentally. The second step is a run-time request confronting the samples with the current environment. The best contact configurations are then selected according to a heuristic for task efficiency, inspired by the force transmission ratio: given a contact configuration, it measures the potential force that can be exerted in a given direction. The contact configurations are then used as inputs for an inverse kinematics solver that computes the final animation. Our method is automatic and does not require examples or motion capture data. It is suitable for real-time applications and applies to arbitrary creatures in arbitrary environments. Various scenarios (such as climbing, crawling, getting up, pushing or pulling objects) are used to demonstrate that our method enhances motion autonomy and interactivity in constrained environments. In Figure , a character is able to select the most appropriate constraints to pull a heavy cupboard, putting a foot on an obstacle to maximize the force ratio.
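A hedged sketch of a force-transmission-ratio heuristic of the kind the text refers to: given the Jacobian J of a limb in a contact configuration, the force transmissible along a unit direction u can be scored as 1/sqrt(u^T J J^T u), and the sampled configuration maximizing this score is selected. The Jacobians below are random stand-ins, not a real character model.

```python
import numpy as np

def force_transmission_ratio(J, u):
    """J: (task_dims, n_dofs) limb Jacobian; u: task-space direction.
    Larger values mean more force can be exerted along u for bounded
    joint torques."""
    u = u / np.linalg.norm(u)
    return 1.0 / np.sqrt(u @ (J @ J.T) @ u)

def best_configuration(jacobians, push_direction):
    """Pick the sampled contact configuration maximizing the ratio."""
    scores = [force_transmission_ratio(J, push_direction) for J in jacobians]
    return int(np.argmax(scores))

# 100 random stand-in ROM samples for a 7-DOF limb, 3D task space:
rng = np.random.default_rng(2)
samples = [rng.normal(size=(3, 7)) for _ in range(100)]
print("selected sample:", best_configuration(samples, np.array([1.0, 0.0, 0.0])))
```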
Bio-inspired controllers and planners are compelling for avatar animation. We are currently conducting several works on this subject within the framework of the ENTRACTE project.
Ana-Lucia Cruz-Ruiz was recruited as a PhD student in November 2013. The goal of this thesis is to define and evaluate muscle-based controllers for avatar animation. A first result has been obtained in defining and validating a bio-inspired limb controller based on a linearizing loop of a neuromuscular complex. Its application to a one-DOF limb has been validated by comparing the muscle activation shapes obtained in simulation with standard recordings of biceps and triceps activation.
When planning a path in their environment, pedestrians do not consider every detail at once. Instead, people first plan a coarse path, choosing the streets to travel to reach their goal. Local decisions, such as where to cross a street or on which side to pass a pole, are taken during navigation. In computer science, hierarchical representations of an environment are often used to reduce the computation cost of the planning algorithm. Such a representation also enables smarter navigation behaviours. Indeed, it offers the opportunity to delay local planning until relevant information is available. It also enables a quick recovery from unexpected events, as the high-level path might stay valid even if unexpected events alter the lower-level path.
We proposed a method that automatically generates a three-level hierarchical representation of an informed urban environment. In this hierarchy, each level is a semantically coherent partition of the navigation areas and can be used to plan paths at different levels of abstraction. This representation is used in a path planning process that delays some decisions until relevant information is perceived. This algorithm uses path options to smartly adapt the path when unexpected events occur.
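A schematic sketch of hierarchical planning with delayed refinement, under assumed toy data structures: plan once over a coarse street graph, then refine each coarse edge only when the pedestrian reaches it, so that an unexpected event invalidates only the local plan.

```python
import heapq

def dijkstra(graph, start, goal):
    """graph: {node: {neighbor: cost}}; returns the cheapest path."""
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return None

# Coarse level: streets only. Fine levels (crossings, sidewalk cells)
# would be expanded on the fly, one street at a time.
streets = {"home": {"main_st": 1.0}, "main_st": {"market": 2.0}, "market": {}}
print("coarse plan:", dijkstra(streets, "home", "market"))
```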
In clinical practice, shoulder mobility is frequently evaluated through mono-axial amplitudes. Interestingly, for diagnosing shoulder hyperlaxity or frozen shoulder, external rotation of the arm whilst at the side (ER1) is commonly used. We first gave a definition of hyperlaxity, as currently described in the literature, and of its link with shoulder instability and treatment. After looking for an optimized way to examine external rotation of the shoulder, we proposed a novel index to quantify global shoulder mobility, the Shoulder Configuration Space Volume (SCSV), corresponding to the reachable volume in the configuration space of the shoulder joint. This new index was then examined through its correlation to shoulder signs of hyperlaxity.
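A speculative sketch of how a configuration-space volume such as the SCSV could be estimated: collect measured shoulder configurations (three rotation angles) over a mobility trial and take the volume of their convex hull in angle space. The actual definition of the index may differ; the data below are synthetic.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Synthetic stand-in for measured (flexion, abduction, rotation)
# triplets, in degrees, collected over a mobility trial:
rng = np.random.default_rng(3)
configs = rng.normal(0.0, 30.0, size=(500, 3))
hull = ConvexHull(configs)
print("reachable-volume estimate (deg^3):", hull.volume)
```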
This contract aims at developing new ergonomics assessments based on inaccurate Kinect measurements of real workers in factories. The main challenges are:
being able to improve the Microsoft Kinect measurements in order to extract accurate poses from depth images when occlusions may occur,
developing new inverse dynamics methods based on such inaccurate kinematic data in order to estimate the joint torques required to perform the observed task,
and proposing a new assessment tool to translate joint torques and poses into potential musculoskeletal disorder risks.
Faurecia has developed its own assessment tool, but it requires tedious and subjective tasks from the user, at specific times in the work cycle. By using Kinect information, we aim at providing more objective data over the whole cycle, not only at specific times. We also wish to let the user focus on the interpretation and understanding of the operator's tasks instead of spending time estimating joint angles in images.
This work is performed in close collaboration with an ergonomist at Faurecia, together with the software development service of the company, to design the new version of their assessment tool. This tool will first be evaluated on a selection of manufacturing sites and will then be deployed worldwide among the 270 Faurecia sites in 33 countries.
This contract enabled us to hire Pierre Plantard as a PhD student to carry out this work in MimeTIC and the M2S Lab. He started in January 2013 and will finish in December 2015.
In 2014, we developed a test bench based on virtual humans in order to evaluate the expected accuracy of a Kinect sensor in work conditions where the Kinect cannot be placed at a location compatible with the provider's recommendations. This test bench enabled us to evaluate more than 500,000 configurations (Kinect locations and upper-limb poses) with a virtual mannequin and a simulated Kinect. It will help design the most appropriate protocol according to the work conditions and the poses used by operators at the workstation.
The iSpace&Time project is funded by the ANR and gathers six partners: IGN, Lamea, University of Rennes 1, LICIT (IFSTTAR), Telecom ParisTech and the SENSE laboratory (Orange). The goal of this project is the establishment of a web demonstrator of a 4D Geographic Information System of the city. This portal will integrate technologies such as Web 2.0, sensor networks, immersive visualization, animation and simulation. It will provide solutions ranging from simple 4D city visualization to tools for urban development. The main aspects of this project are:
Creation of an immersive visualization based on panoramas acquired by a scanning vehicle using hybrid scanning (laser and image).
Fusion of heterogeneous data produced by a network of sensors enabling the measurement of flows of pedestrians, vehicles and other mobile objects.
Use of video cameras to measure, in real time, flows of pedestrians and vehicles.
Study of the impact of an urban development on mobility by simulating vehicles and pedestrians.
Integration of temporal information into the information system for visualization, data mining and simulation purposes.
The MimeTIC team is involved in the pedestrian simulation part of this project. The project started in 2011 and ended in November 2014.
The goal of the RePLiCA project is to build and test a new rehabilitation program for facial praxia in children with cerebral palsy using an interactive device.
In a classical rehabilitation program, the child tries to reproduce the motion of his/her therapist. The feedback he/she gets relies on the comparison of different modalities: the gesture of the therapist he/she saw a few seconds ago (visual space) and his/her own motion (proprioceptive space). Unfortunately, besides motor troubles, these children often have cognitive troubles, among them a difficulty in converting information from one mental space to another.
The principle of our tool is that during a rehabilitation session the child observes simultaneously, on the same screen, an avatar of the virtual therapist performing the gesture to be done, and a second avatar animated from the motion he/she actually performs. To avoid the use of an overly complex motion capture system, the child is filmed by a simple video camera. A first challenge is thus to be able to capture the child's facial motion with enough accuracy. A second one is to be able to provide him/her with additional feedback on gesture quality by comparing it to a database of healthy children of the same age.
RePLiCA started in January 2012 and will end in July 2015.
Cinecitta is a 3-year young researcher project funded by the French Research Agency (ANR) and led by Marc Christie. The project started in October 2012 and will end in October 2015.
The main objective of Cinecitta is to propose and evaluate a novel workflow that mixes user interaction with motion-tracked cameras and automated computation for interactive virtual cinematography, in order to better support user creativity. We propose a novel cinematographic workflow that features a dynamic collaboration between a creative human filmmaker and an automated virtual camera planner. We expect the process to enhance the filmmaker's creative potential by enabling very rapid exploration of a wide range of viewpoint suggestions. The process has the potential to enhance the quality and utility of the automated planner's suggestions by adapting and reacting to the creative choices made by the filmmaker. This requires three advances in the field. First, the ability to generate relevant viewpoint suggestions following classical cinematic conventions; formalizing these conventions in a computationally efficient and expressive model is a challenging task when selecting and proposing to the user a relevant subset of viewpoints among millions of possibilities. Second, the ability to analyze data from real movies in order to formalize some elements of cinematographic style and genre. Third, the integration of motion-tracked cameras in the workflow. Motion-tracked cameras represent a great potential for cinematographic content creation. However, given that tracking spaces are of limited size, there is a need for novel interaction metaphors to ease the process of content creation with tracked cameras. Finally, we will gather feedback on our prototype by involving professionals (during dedicated workshops) and will perform user evaluations with students from cinema schools.
The ANR project ENTRACTE is a collaboration between the Gepetto team at LAAS, Toulouse (head of the project) and the Inria MimeTIC team. The project started in November 2013 and will end in August 2017. The purpose of ENTRACTE is to address the action planning problem, crucial for robots as well as for virtual human avatars, by analyzing human motion at a biomechanical level and by defining from this analysis bio-inspired motor control laws and bio-inspired paradigms for action planning. Ana-Lucia Cruz-Ruiz was recruited as a PhD student at the start of the project to work on musculoskeletal-based methods for avatar animation. Moreover, Steve Tonneau, a PhD student currently in his third year, is also developing bio-inspired posture generators for avatar navigation in cluttered environments.
The ADT MAN-IP aims at proposing a common production pipeline for both the MimeTIC and Hybrid teams. This pipeline intends to facilitate the production of populated virtual reality environments.
The pipeline starts with the motion capture of an actor, using devices such as a Vicon (product of Oxford Metrics) system. To do so, we need to design new methods to automatically adapt all motion capture data to an internal skeleton that can be reused to retarget the motion to various types of skeletons and characters. The purpose is then to play this motion capture data on any type of virtual character used in the demos, regardless of their individual skeletons and morphologies. The key point here is to make this process as automatic as possible. During the first year, a software toolbox was developed in MotionBuilder (product of Autodesk) to automate this process. We also developed automatic path-following methods to make virtual humans locomote along a given path in the environment in Unity 3D.
The second step in the pipeline is to design a high-level scenario framework to describe a virtual scene and the user's possible interactions with this scene, so that he/she can interact with the story directly. This work will be performed in 2015.
In this ADT, we will also have to connect these two parts into a unique framework that can be used by non-experts in computer animation to design new immersive experiments involving autonomous virtual humans. The resulting framework could consequently be used in the Immersia immersive room for various types of applications.
Our current Virtual Reality systems have allowed us to be a key partner in the European project VISIONAIR (http://
This project is built with the participation of 26 partners: INPG ENTREPRISE SA IESA France, Institut Polytechnique de Grenoble France, University of Patras LMS Greece, Cranfield University United Kingdom, Universiteit Twente Utwente Netherlands, Universitaet Stuttgart Germany, Instytut Chemii Bioorganicznej Pan Psnc Poland, Université de la Méditerranée d'Aix-Marseille II France, Consiglio Nazionale delle Ricerche CNR Italy, Institut National de Recherche en Informatique et en Automatique Inria France, Kungliga Tekniska Hoegskolan Sweden, Technion - Israel Institute of Technology Israel, Rheinisch-Westfaelische Technische Hochschule Aachen RWTH Germany, Poznan University of Technology Poland, Arts et Métiers ParisTech AMPT France, Technische Universitaet Kaiserslautern Germany, The University of Salford United Kingdom, Fraunhofer-Gesellschaft zur Foerderung der Angewandten Forschung Germany, Fundació Privada I2CAT Spain, University of Essex United Kingdom, Magyar Tudomanyos Akademia Szamitastechnikai es Automatizalasi Kutato Intezet Hungary, École Centrale de Nantes France, University College London United Kingdom, Politecnico di Milano Polimi Italy, and the European Manufacturing and Innovation Research Association (cluster leading excellence).
We organized the General Assembly of VISIONAIR at the Inria Rennes - Bretagne Atlantique research centre, from December 2nd to December 4th, 2014. We hosted 60 participants and had very interesting scientific presentations.
Title: Fostering Research on Models for Storytelling Applications
International Partner (Institution - Laboratory - Researcher):
National Cheng Chi University (TAIWAN)
Duration: 2013 - 2015
See also: http://
The application context targeted by this proposal is interactive virtual storytelling. The growing importance of this form of media reveals the necessity to re-think and re-assess the way narratives are traditionally structured and authored. In turn, this requires the research community to address complex scientific and technical challenges at the intersection of literature, robotics, artificial intelligence, and computer graphics. This joint collaboration addresses three key issues in virtual storytelling: (i) delivering better authoring tools for designing interactive narratives based on literary-founded narrative structures, (ii) establishing a bridge between the semantic level of the narrative and the geometric level of the final environment to enable the simulation of complex and realistic interactive scenarios in 3D, and (iii) providing a full integration of the cinematographic dimension through the control of high-level elements of filmic style (pacing, preferred viewpoints, camera motion). The project builds on a solid past collaboration and will rely on the teams' complementarity to achieve these tasks through the development of a joint research prototype.
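To illustrate the second issue, the bridge between the semantic and geometric levels, the sketch below expands a narrative-level action into geometric staging constraints that a 3D scene solver could then satisfy. The action vocabulary and constraint values are invented for illustration only.

```python
def constraints_for(action, subject, obj):
    """Expand one narrative-level action (e.g. 'threatens') into geometric
    constraints: whether the characters face each other, and how close
    they must stand (in metres)."""
    rules = {"threatens": {"face_each_other": True, "max_distance": 1.5},
             "watches":   {"face_each_other": True, "max_distance": 8.0},
             "ignores":   {"face_each_other": False, "max_distance": None}}
    spec = rules.get(action, {})
    return {"subject": subject, "object": obj, **spec}

# Example: staging constraints for "the villain threatens the hero".
print(constraints_for("threatens", "villain", "hero"))
```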
Title: Toward realistic and efficient simulation of highly complex systems
International Partner (Institution - Laboratory - Researcher):
University of North Carolina at Chapel Hill (UNITED STATES)
Duration: 2012 - 2014
See also: http://
The general goal of SIMS is to make significant progress toward the realistic and efficient simulation of highly complex systems, which raise combinatorially explosive problems. The proposal focuses on human motion and interaction and covers three active topics with a wide application range: 1. crowd simulation: virtual humans interacting with other virtual humans; 2. autonomous virtual humans: virtual humans interacting with their environment; 3. physical simulation: real humans interacting with virtual environments. SIMS is orthogonally structured by transversal questions: evaluating the level of realism reached by a simulation (a problem in itself in the considered topics), considering complex systems at various scales (microscopic, mesoscopic and macroscopic), and facing the combinatorial explosion of simulation algorithms.
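As an example at the microscopic scale mentioned above, the sketch below performs one integration step of a simplified Helbing-style social-force crowd model, a classical baseline in the field. This is not the project's own algorithm, and the parameter values are typical but arbitrary.

```python
import numpy as np

def social_force_step(pos, vel, goal, neighbors, dt=0.05,
                      v0=1.4, tau=0.5, A=2.0, B=0.3):
    """One semi-implicit Euler step of a simplified social-force model.

    pos, vel, goal: 2D numpy arrays for one agent; neighbors: list of
    2D positions of nearby agents. v0 is the preferred walking speed,
    tau the relaxation time, A and B the repulsion strength and range.
    """
    # Driving force: relax toward the preferred velocity.
    to_goal = goal - pos
    desired_dir = to_goal / (np.linalg.norm(to_goal) + 1e-9)
    force = (v0 * desired_dir - vel) / tau
    # Repulsion from each neighbor, decaying exponentially with distance.
    for n_pos in neighbors:
        diff = pos - n_pos
        dist = np.linalg.norm(diff) + 1e-9
        force += A * np.exp(-dist / B) * (diff / dist)
    vel = vel + dt * force
    return pos + dt * vel, vel
```

The combinatorial explosion mentioned above appears as soon as such per-agent updates are coupled over thousands of interacting agents, which motivates the multi-scale (microscopic to macroscopic) structure of the project.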
Scientific Chair: Franck Multon, "Maintien de l'équilibre debout en situation dynamique chez l'être humain", special day of the French Society of Biomechanics, co-organized with Pierre-Brice Wieber and Thomas Robert, Paris, May 16th 2014
Richard Kulpa, Symposium "Réalité virtuelle et équilibre", SOFPEL (Société Francophone Posture Equilibre Locomotion), Rennes, December 5th, 2014
Franck Multon, ACM VRST 2014, Edinburgh, UK, November 2014
Franck Multon, ACM Motion in Games MIG 2014, Los Angeles, US, November 2014
Franck Multon, Computer Animation and Social Agents CASA 2014, Houston, May 2014
Franck Multon, IEEE VR 2015, ACM SIGGRAPH Asia 2014
Armel Crétual, EuroGraphics'15
Richard Kulpa, IEEE 3DUI
Georges Dumont, ASME-DETC 2014, IDMME-Virtual Concept 2014
Franck Multon, Presence, MIT Press
Armel Crétual, Journal of Electromyography and Kinesiology, Elsevier
Franck Multon, Frontiers in Neuroscience, PLoS ONE
Armel Crétual, Transactions on Visualization and Computer Graphics, Journal of Electromyography and Kinesiology, Gait & Posture, Medical & Biological Engineering & Computing
Charles Pontonnier, IEEE Transactions on Visualization and Computer Graphics, International Journal of Virtual Reality
Master : Franck Multon, "Images et Mouvement - IMO", leader of the module, 20H, Master 2 research in computer sciences, University Rennes1, France
Master : Franck Multon, "Santé et Performance au Travail : étude de cas", leader of the module, 30H, Master 1 M2S, University Rennes2, France
Master : Franck Multon, "Analyse Biomécanique de la Performance Motrice", leader of the module, 30H, Master 1 M2S, University Rennes2, France
Master : Franck Multon, "Modélisation et Simulation du Mouvement", leader of the module, 30H, Master 2 M2S, University Rennes2, France
Master: Marc Christie, "Multimedia Mobile", leader of the module, Computer Science, University of Rennes 1, France
Master : Armel Crétual, "Méthodologie", leader of the module, 20H, Master 1 M2S, University Rennes2, France
Master : Armel Crétual, "Biostatstiques", leader of the module, 15H, Master 2 M2S, University Rennes2, France
Master: Charles Pontonnier, "Numerical methods", leader of the module, Mechanics, École Spéciale Militaire de Saint-Cyr Coëtquidan, France
Master: Charles Pontonnier, "Numerical simulation of mechanical systems", leader of the module, Mechanics, École Spéciale Militaire de Saint-Cyr Coëtquidan, France
Master: Charles Pontonnier, "Analytical Mechanics" , Mechanics, École Spéciale Militaire de Saint-Cyr Coëtquidan, France
Master: Charles Pontonnier, "Design and control of mobile robots", leader of the module, Electronics, École Spéciale Militaire de Saint-Cyr Coëtquidan, France
Master: Charles Pontonnier, "Design, simulation and control of mechanical systems", leader of the module, Lecturers training in mechatronics, École Normale Supérieure de Rennes, France
Master : Richard Kulpa, "Contrôle moteur", leader of the module, Master 1 M2S, Université Rennes 2, France
Master : Richard Kulpa, "Boucle analyse-modélisation-simulation du mouvement", leader of the module, Master 2 M2S, Université Rennes 2, France
Master : Richard Kulpa, "Méthodes numériques d'analyse du geste", leader of the module, Master 2 M2S, Université Rennes 2, France
Master : Richard Kulpa, "Cinématique inverse", leader of the module, Master 2 M2S, Université Rennes 2, France
Master: Fabrice Lamarche, "Compilation pour l'image numérique", 29h, Master 1, ESIR, University of Rennes 1, France
Master: Fabrice Lamarche, "Synthèse d'images", 12h, Master 1, ESIR, University of Rennes 1, France
Master: Fabrice Lamarche, "Synthèse d'images avancée", 28h, Master 1, ESIR, University of Rennes 1, France
Master: Fabrice Lamarche, "Modélisation Animation Rendu", 36h, Master 2, ISTIC, University of Rennes 1, France
Master: Fabrice Lamarche, "Jeux vidéo", 26h, Master 2, ESIR, University of Rennes 1, France
Master : Georges Dumont, Mechanical simulation in Virtual reality, 36H, Master Mechatronics, Rennes 1 University and École Normale Supérieure de Rennes, France
Master : Georges Dumont, head of the second year of the Master in Mechatronics, Rennes 1 University and École Normale Supérieure de Rennes, France
Master : Georges Dumont, Mechanics of deformable systems, 40H, Master FES, École Normale Supérieure de Rennes, France
Master : Georges Dumont, oral preparation for the agrégation competitive exam, 20H, Master FES, École Normale Supérieure de Rennes, France
Master : Georges Dumont, Vibrations in Mechanics, 10H, Master FES, École Normale Supérieure de Rennes, France
Master : Georges Dumont, Multibody Dynamics, 9H, Master FES, École Normale Supérieure de Rennes, France
Master : Georges Dumont, Finite Element Method, 12H, Master FES, École Normale Supérieure de Rennes, France
Licence : Franck Multon, "Ergonomie du poste de travail", Licence STAPS L2 & L3, University Rennes2, France
Licence : Marc Christie, "Système d'information Tactiques", Computer Science, University of Rennes 1, France
Licence : Marc Christie, "Programmation Impérative 1", leader of the module, University of Rennes 1, France
Licence : Armel Crétual, "Analyse cinématique du mouvement", 100H, Licence 1, University Rennes 2, France
Licence: Charles Pontonnier, "Numerical control", leader of the module, Electronics, École Inter-Armes de Saint-Cyr Coëtquidan, France
Licence : Richard Kulpa, "Biomécanique (dynamique en translation et rotation)", Licence 2, Université Rennes 2, France
Licence : Richard Kulpa, "Méthodes numériques d'analyse du geste", Licence 3, Université Rennes 2, France
Licence : Richard Kulpa, "Statistiques et informatique", Licence 3, Université Rennes 2, France
Licence: Fabrice Lamarche, "Initiation à l'algorithmique et à la programmation", 56h, License 3, ESIR, University of Rennes 1, France
License: Fabrice Lamarche, "Programmation en C++", 46h, License 3, ESIR, University of Rennes 1, France
Licence: Fabrice Lamarche, "IMA", 24h, License 3, ENS Rennes, ISTIC, University of Rennes 1, France
PhD : Antoine Marin, Le mouvement segmentaire au service du déplacement dans la marche : analyse couplée des deux niveaux, University Rennes 2, December 15th, 2014, Armel Crétual
PhD : Mickael Ropars, Contribution clinique et biomécanique au diagnostic d'hyperlaxité de l'épaule, University Rennes 2, April 7th, 2014, Isabelle Bonan & Armel Crétual
PhD : Marion Morel, Suivi et étude des interactions pour l'analyse des tactiques durant un match de basket-ball, UPMC - University Rennes 2, September 2014, Catherine Achard & Séverine Dubuisson & Richard Kulpa
PhD : Pierre Touzard, Suivi longitudinal du service de jeunes joueurs de tennis élite : identification biomécanique des facteurs de performance et de risque de blessures, University Rennes 2, September 2014, Benoit Bideau & Richard Kulpa & Caroline Martin
PhD : Steve Tonneau, Synthèse et planification de mouvement pour des personnages virtuels en environnements contraints, INSA Rennes, 2011-2015, Franck Multon & Julien Pettré
PhD : Pierre Plantard, Estimation des efforts musculaires à partir de données in situ pour l’évaluation ergonomique d’un poste de travail, 2013-2016, Franck Multon & Anne-Sophie LePierres
PhD: Philippe Rannou, Modèle rationnel pour les humains virtuels autonomes, University of Rennes 1, Fabrice Lamarche & Marie-Odile Cordier
PhD: Carl-Johan Jorgensen, Peuplement automatisé d'environnements urbains pour l'étude et la validation d'aménagements, University of Rennes 1, Fabrice Lamarche & Kadi Bouatouch
PhD: Ana Lucia Cruz Ruiz, Design of a modular and multiscale musculoskeletal model as a support to motion analysis-synthesis, École Normale Supérieure, Georges Dumont
PhD: Kevin Jordao, Peuplement massif de maquettes numériques immenses, University Rennes1, Julien Pettré
PhD: Julien Bruneau, Foules immersives, University Rennes1, Julien Pettré
PhD : Antoine Marin, Le mouvement segmentaire au service du déplacement dans la marche : analyse couplée des deux niveaux, Université Rennes2, December 15th, Franck Multon, examiner
PhD : Benjamin Goislard de Monsabert, Individualisation des paramètres musculaires pour la modélisation musculo-squelettique de la main : application à la compréhension de l'arthrose, Université Aix-Marseille, Franck Multon, jury president
PhD : Simon Courtemanche, Analyse et Simulation des Mouvements Optimaux en Escalade, Université de Grenoble, Franck Multon, reviewer
PhD : Chenggang Wang, Disassembly sequences generation and evaluation. Integration in virtual reality environment, Université de Grenoble, Georges Dumont, reviewer
PhD : Thi Thuong Huyen Nguyen, Proposition of new metaphors and techniques for 3D interaction and navigation preserving immersion and facilitating collaboration between distant users, INSA de Rennes, Georges Dumont, examiner
HDR : Didier Pradon, Modélisation et caractérisation des activités locomotrices des sujets cérébrolésés, UVSQ, Franck Multon, jury president
HDR : Christine Azevedo, Assistance fonctionnelle : exploiter les fonctions résiduelles du système sensori-moteur déficient, Université de Montpellier 2, Franck Multon, reviewer