Section: New Results
Biomechanics for motion analysis-synthesis
Participants: Charles Pontonnier, Georges Dumont, Franck Multon, Antoine Muller, Diane Haering.
The PhD thesis of Antoine Muller, defended on June 26, aimed at democratizing the use of musculoskeletal analysis for a wide range of users. The work proposed contributions improving the performance of such analyses while preserving accuracy, as well as contributions enabling easy subject-specific model calibration. First, in order to control the whole analysis process, the work adopted a global approach covering all the analysis steps: kinematics, dynamics and muscle force estimation. For each of these steps, fast analysis methods have been proposed. In particular, a fast resolution method for the muscle force sharing problem, based on interpolated data, has been proposed. Moreover, a complete calibration process relying on the classical motion analysis tools available in a biomechanics lab (motion capture and force platform data) has been developed.
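The report does not detail the cost function used to resolve the muscle force sharing problem. A common static-optimization formulation minimizes the sum of squared muscle activations subject to the net joint torque constraint; when bounds are ignored, this admits a closed-form solution, sketched below (moment arms and maximal forces are illustrative values, not data from the thesis):

```python
def share_muscle_forces(torque, moment_arms, f_max):
    """Closed-form static optimization: minimize sum of squared
    activations (F_i / Fmax_i)^2 subject to sum(r_i * F_i) = torque.
    Force bounds are ignored in this sketch."""
    denom = sum(r * r * fm * fm for r, fm in zip(moment_arms, f_max))
    return [torque * r * fm * fm / denom
            for r, fm in zip(moment_arms, f_max)]

# Hypothetical elbow flexors: moment arms in metres, max forces in newtons.
forces = share_muscle_forces(20.0, [0.04, 0.02], [1000.0, 500.0])
```

The returned forces reproduce the requested 20 Nm exactly, with the stronger, better-leveraged muscle taking most of the load.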
Diane Haering, Inria post-doctoral fellow at MimeTIC, works on the determination of maximal torque envelopes of the elbow. These results have great potential for application: quantifying articular load during work tasks and helping to calibrate muscle parameters in musculoskeletal simulations. The method has been integrated into a more global subject-specific calibration method, and could also be used to better represent musculoskeletal models.
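The report does not give the functional form of the envelope. A common modelling choice expresses the maximal torque as an isometric maximum scaled by angle- and velocity-dependent factors; the sketch below uses a Gaussian angle factor and a linear concentric drop-off, with purely illustrative parameter values (all assumptions, not fitted elbow data):

```python
import math

def max_elbow_torque(angle, velocity, t_iso=75.0, angle_opt=1.4,
                     angle_width=0.7, v_max=12.0):
    """Hedged sketch of a torque-angle-velocity envelope:
    tau_max = t_iso * f_angle(angle) * f_velocity(velocity).
    angle/velocity in rad and rad/s; parameter values are illustrative."""
    f_angle = math.exp(-((angle - angle_opt) / angle_width) ** 2)
    f_vel = max(0.0, 1.0 - velocity / v_max)  # zero torque at max velocity
    return t_iso * f_angle * f_vel
```

At the optimal angle and zero velocity the envelope returns the isometric maximum; torque falls off away from the optimal angle and with increasing concentric velocity.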
Ana-Lucia Cruz-Ruiz was a PhD student from November 2013 to December 2016. The goal of her thesis, related to the ANR Entracte project, was to define and evaluate muscle-based controllers for motion control. She developed an original control approach to reduce the redundancy of the musculoskeletal system. A low-dimensional representation of control mechanisms in throwing motions, covering a variety of subjects and target distances, was proposed. This representation stands at the kinematic level in task and joint spaces, and at the muscle activation level using the theory of muscle synergies. Representative features were extracted from the muscle data using factorization and clustering techniques. This better represents the mechanisms hidden behind such dynamical motions, and could offer a promising control representation for synthesizing motions with muscle-driven characters.
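Muscle synergy extraction is typically done with non-negative matrix factorization of the EMG envelopes. As an illustration of the factorization step (the thesis' exact algorithm and preprocessing are not specified here), a minimal NMF with Lee-Seung multiplicative updates can be sketched as:

```python
import numpy as np

def extract_synergies(emg, n_synergies, n_iter=2000, seed=0):
    """Factorize a non-negative EMG matrix (muscles x time) as
    emg ~ W @ H, where W (muscles x synergies) holds synergy vectors
    and H (synergies x time) their activation coefficients.
    Uses Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    n_muscles, n_frames = emg.shape
    W = rng.random((n_muscles, n_synergies)) + 1e-6
    H = rng.random((n_synergies, n_frames)) + 1e-6
    for _ in range(n_iter):
        H *= (W.T @ emg) / (W.T @ W @ H + 1e-12)
        W *= (emg @ H.T) / (W @ (H @ H.T) + 1e-12)
    return W, H
```

On synthetic EMG built from two known synergies, the factorization recovers a close low-rank reconstruction of the input.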
Interactions between walkers
Participants: Anne-Hélène Olivier, Armel Crétual, Richard Kulpa, Sean Lynch, Laurentius Meerhoff.
Interaction between people, and especially local interaction between walkers, is a main research topic of MimeTIC. We propose experimental approaches using both real and virtual environments to study both the perception and the action aspects of the interaction. Our efforts to validate the virtual reality platform for studying interactions were acknowledged by a publication in IEEE TVCG 2017 and a presentation at the IEEE VR 2017 conference. Using the VR platform, we investigated the nature of the visual information used to achieve collision-free interaction. We manipulated this information in two forms: global and local appearance. The obstacle was presented with one of five virtual appearances, associated either with global motion cues (i.e., a cylinder or a sphere) or with local motion cues (i.e., only the legs or the trunk). A full-body virtual walker, showing both local and global motion cues, was used as a reference condition. The final crossing distance was affected by the global motion appearances; however, appearance had no qualitative effect on motion adaptations. These findings contribute towards further understanding what information people use when interacting with others. This work was published in TVCG 2017 and presented as a poster at the ACAPS 2017 conference. This year, we also developed new experiments in our immersive platform. We designed a study to investigate the effect of gaze interception during collision avoidance between two walkers. In such a situation, mutual gaze can be considered as a form of nonverbal communication. Additionally, gaze is believed to convey future path intentions and to be part of the nonverbal negotiation to achieve avoidance collaboratively. We considered an avoidance task between a real subject and a virtual human character and studied the influence of the character's gaze direction on the avoidance behaviour of the participant.
Virtual reality provided us with accurate control of the situation: seventeen participants were immersed in a virtual environment and instructed to navigate across a virtual space using a joystick while avoiding a virtual character that would appear from either side. The character would either gaze towards the participant or not. Further, the character would or would not perform a reciprocal adaptation of its trajectory to avoid a potential collision with the participant. The findings were that, during an orthogonal collision avoidance task, gaze behaviour did not influence the collision avoidance behaviour of the participants. Further, the addition of reciprocal collision avoidance with gaze did not modify the collision behaviour of participants. These results suggest that, for the duration of the interaction in such a task, body motion cues were sufficient for coordination and regulation. We discuss the possible exploitation of these results to improve the design of virtual characters for populated virtual environments and for interaction with users. These results were presented at the AFRV 2017 conference and submitted to the IEEE VR 2018 conference.
We also devoted considerable effort to investigating, in collaboration with Julien Pettré from the Inria Lagadic team, the process involved in the selection of interactions within our neighbourhood. Considering the complex case of multiple interactions, we performed experiments in real conditions where a participant walked across a room while either one (i.e., pairwise) or two (i.e., group) participants crossed the room perpendicularly. By comparing these pairwise and group interactions, we assessed whether a participant avoids two upcoming collisions simultaneously or as sequential pairwise interactions. Furthermore, in the group trials we varied the relative position of the two participants who crossed the trajectory of the other. This allowed us to change the affordance of passing through or around (i.e., its ‘pass-ability’). Results showed that participants consistently avoided collisions at lower risks of impending collision (as quantified by the future distance of closest approach) in the group trials compared to the pairwise trials. This implies that a participant, to some extent, interacted simultaneously with two other participants. Furthermore, we analysed how the ‘pass-ability’ evolved over time in the group trials. Results indicated that the affordance of passing through or around was already established early in the interaction. This shows that participants are susceptible to the affordance of passing through a gap between others. We concluded that pedestrians are able to interact with two other walkers simultaneously, rather than treating each interaction in sequence. These results were presented at the ICPA 2017 conference.
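The future distance of closest approach used to quantify collision risk assumes both walkers keep their current velocity; under that assumption it has a simple closed form, sketched below for the planar case (function name and inputs are illustrative):

```python
import math

def distance_of_closest_approach(p1, v1, p2, v2):
    """Future distance of closest approach for two agents assumed to
    keep constant velocity: minimize |(p1 - p2) + (v1 - v2) * t|
    over t >= 0. Positions and velocities are 2D tuples."""
    px, py = p1[0] - p2[0], p1[1] - p2[1]
    vx, vy = v1[0] - v2[0], v1[1] - v2[1]
    vv = vx * vx + vy * vy
    # If relative velocity is zero, the current distance never changes.
    t_star = 0.0 if vv == 0 else max(0.0, -(px * vx + py * vy) / vv)
    return math.hypot(px + vx * t_star, py + vy * t_star)
```

For two walkers on exactly intersecting orthogonal paths the value is zero; offsetting one starting position yields a small positive distance, i.e. a lower risk of impending collision.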
Finally, we continued working on the interaction between a walker and a moving robot, in collaboration with Philippe Souères and Christian Vassallo (LAAS, Toulouse). The development of robotics has accelerated in recent years, and it is clear that robots and humans will share the same environment in the near future. In this context, understanding local interactions between humans and robots during locomotion tasks is important to steer robots among humans in a safe manner. Our work is a first step in this direction. Our goal is to describe how, during locomotion, humans avoid collision with a moving robot. We recently published in Gait and Posture our results on collision avoidance between participants and a non-reactive robot (we wanted to avoid the effect of a complex loop created by a robot reacting to the participants' motion). Our objective was to determine whether the main characteristics of such an interaction preserve those previously observed: accurate estimation of collision risk, anticipated and efficient adaptations. We observed that collision avoidance between a human and a robot has similarities with human-human interactions (estimation of collision risk, anticipation) but also leads to major differences. Humans preferentially give way to the robot, even if this choice is not optimal with regard to the motion adaptation needed to avoid the collision. In a new study, we considered the situation where the robot was reactive to the walker's motion. First, it appears that humans have a good understanding of the robot's behaviour, and their reactions are smoother and faster than with a non-collaborative robot. Second, humans adapt similarly to the human-human case, and the crossing order is respected in almost all cases. These results show strong similarities with those observed when two humans cross each other.
New automatic methods to assess motion in industrial contexts based on Kinect
Participants: Franck Multon, Georges Dumont, Charles Pontonnier, Pierre Plantard, Antoine Muller.
Recording human activity is a key point of many applications and fundamental works. Numerous sensors and systems have been proposed to measure positions, angles or accelerations of the user's body parts. Whatever the system, one of the main challenges is to automatically recognize and analyze the user's performance from poor and noisy signals. Hence, recognizing and measuring human performance are important scientific challenges, especially when using low-cost and noisy motion capture systems. MimeTIC has addressed the above problems in two main application domains. In this section, we detail the ergonomics application of such an approach. In ergonomics, we explored the use of low-cost motion capture systems (i.e., a Microsoft Kinect) to measure the 3D pose of a subject in natural environments, such as on a workstation, with many occlusions and inappropriate sensor placements. Predicting the potential accuracy of the measurement for such complex 3D poses and sensor placements is challenging with classical experimental setups. After evaluating the actual accuracy of the pose reconstruction method delivered by the Kinect, we identified occlusions as a key problem to solve in order to obtain reliable ergonomic assessments in real cluttered environments. To this end, we developed an approach to deal with the long occlusions that occur in real manufacturing conditions. This approach is based on a structured database of examples (named filtered pose graph) that enables real-time correction of Kinect skeleton data.
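The published filtered pose graph structures the example database for real-time lookup; stripped of that indexing, the core substitution idea is to find the database example closest to the reliably tracked joints and borrow its occluded joints. A minimal sketch of that step (joint names and data layout are illustrative):

```python
def correct_pose(partial_pose, database, reliable):
    """Replace occluded joints with those of the closest database
    example. Distance is computed on reliably tracked joints only.
    partial_pose: dict joint -> (x, y, z) for tracked joints;
    database: list of full example poses (same dict layout);
    reliable: set of joint names currently tracked with confidence."""
    def dist(example):
        return sum((example[j][k] - partial_pose[j][k]) ** 2
                   for j in reliable for k in range(3))
    best = min(database, key=dist)
    # Keep measured joints; fill occluded ones from the best example.
    return {j: (partial_pose[j] if j in reliable else best[j])
            for j in best}
```

A real-time system would additionally filter the candidate set by pose continuity, which is what the pose-graph structure provides; a brute-force nearest neighbour is shown here only for clarity.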
This method has been applied to a complete ergonomic process outputting RULA scores based on the reconstructed and corrected poses. We challenged this method against a reference motion capture system in laboratory conditions. To this end, we compared joint angles and RULA scores obtained with our system and with a reference Vicon mocap system in various conditions (with and without occlusions). The results show very good agreement between manually tuned RULA scores given by experts and those computed by the automatic system. These results demonstrate that it could be used in an industrial context to support the ergonomist's decision-making process.
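To illustrate how a RULA sub-score is derived from a reconstructed joint angle, the sketch below scores the upper arm from shoulder flexion alone, using the standard RULA angle brackets; the published system also applies adjustments (abduction, raised shoulder, arm support) that are omitted here:

```python
def rula_upper_arm_score(flexion_deg):
    """Simplified RULA upper-arm score from shoulder flexion angle
    in degrees (extension negative). Posture adjustments for
    abduction, shoulder raise and arm support are omitted."""
    if -20.0 <= flexion_deg <= 20.0:
        return 1                      # near-neutral arm
    if flexion_deg < -20.0 or flexion_deg <= 45.0:
        return 2                      # marked extension or mild flexion
    if flexion_deg <= 90.0:
        return 3                      # raised arm
    return 4                          # arm above shoulder level
```

The full RULA grand score then combines such sub-scores for the arm, wrist, neck, trunk and legs through the published lookup tables.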
This year we also extended this work to evaluate whether the corrected data enabled us to estimate reliable joint torques using inverse dynamics, providing new information for ergonomic assessment. Indeed, joint torques and forces are relevant quantities to estimate the biomechanical constraints of working tasks in ergonomics. However, inverse dynamics requires accurate motion capture data, which are generally not available in real manufacturing plants. Markerless and calibrationless measurement systems based on depth cameras, such as the Microsoft Kinect, are promising means to measure 3D poses in real time, for instance using our corrected Kinect approach. Thus, we evaluated the reliability of an inverse dynamics method based on this corrected skeleton data and its potential use to estimate joint torques and forces in such cluttered environments. To this end, we compared the calculated joint torques with those obtained with a reference inverse dynamics method based on an optoelectronic motion capture system. Results show that the Kinect skeleton data enabled the inverse dynamics process to deliver reliable joint torques in occlusion-free (r=0.99 for the left shoulder elevation) and occluded (r=0.91 for the left shoulder elevation) environments, although differences remain between the joint torque estimates. Such reliable joint torques open appealing perspectives for new fatigue or solicitation indexes based on internal efforts measured on site. The study demonstrates that corrected Kinect data can be used to estimate internal joint torques with an adapted inverse dynamics method. The method can be applied on-site because it handles some cases with occlusions. The resulting Kinect-based method is easy to use, runs in real time and could assist ergonomists in risk evaluation on site.
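The r values above are Pearson correlation coefficients between the Kinect-based and reference torque time series. For completeness, the comparison metric can be computed without any dependency as:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two torque time
    series of equal length (e.g. Kinect-based vs. reference)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)
```

Correlation captures agreement in shape but not in magnitude, which is consistent with the observation that torque estimates can correlate highly while still differing in absolute value.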
This work was partially funded by the Faurecia company through a Cifre convention.
Clinical gait assessment based on Kinect data
Participant: Franck Multon.
In clinical gait analysis, we proposed a method to overcome the main limitations imposed by the low accuracy of the Kinect measurements in real medical exams. Indeed, inaccuracies in the 3D depth images lead to badly reconstructed poses and inaccurate gait event detection. In the latter case, confusion between the foot and the ground leads to inaccuracies in the detection of foot-strike and toe-off events, which are essential information in a clinical exam. To tackle this problem, we assumed that heel-strike events could be indirectly estimated by searching for the extreme values of the distance between the knee joints along the walking longitudinal axis. As the Kinect sensor may not accurately locate the knee joint, we used anthropometric data to select a body point located at the constant height where the knee should be in the reference posture. Compared to previous works using a Kinect, heel-strike events and gait cycles are more accurately estimated, which could improve global clinical gait analysis frameworks based on such a sensor. Once these events are correctly detected, it is possible to define indexes that give the clinician a rapid overview of the quality of the gait. We therefore proposed a new method to assess gait asymmetry based on depth images, decreasing the impact of errors in the Kinect joint tracking system. It is based on the longitudinal spatial difference between lower-limb movements during the gait cycle. The movement of artificially impaired gaits was recorded using both a Kinect placed in front of the subject and a motion capture system. The proposed longitudinal index distinguished asymmetrical gait, while other symmetry indices based on spatiotemporal gait parameters failed with such Kinect skeleton measurements. This gait asymmetry index measured with a Kinect is low cost, easy to use and is a promising development for clinical gait analysis.
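The event-detection idea above (extrema of the knee-to-knee distance along the walking axis) can be sketched in a few lines. The mapping of maxima to left heel strikes and minima to right heel strikes is an assumption for illustration; it depends on the sign convention of the walking axis:

```python
def detect_heel_strikes(left_z, right_z):
    """Candidate heel-strike frames as local extrema of the
    longitudinal (walking-axis) distance between the two knee points.
    left_z, right_z: per-frame knee positions along the walking axis.
    Returns (left_strikes, right_strikes) as frame indices; here
    maxima of d = left - right are taken as left strikes, minima as
    right strikes (sign convention is an assumption)."""
    d = [l - r for l, r in zip(left_z, right_z)]
    left_strikes, right_strikes = [], []
    for i in range(1, len(d) - 1):
        if d[i] > d[i - 1] and d[i] >= d[i + 1]:
            left_strikes.append(i)
        elif d[i] < d[i - 1] and d[i] <= d[i + 1]:
            right_strikes.append(i)
    return left_strikes, right_strikes
```

On noisy depth data, the signal would be low-pass filtered before the extrema search; successive same-side strikes then delimit the gait cycles.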
This method has been challenged against other classical approaches to assess gait asymmetry using either cheap Kinect data or Vicon data. We demonstrated the superiority of the approach when using Kinect data, for which traditional approaches failed to accurately detect gait asymmetry. It has been validated on healthy subjects who were made to walk with a 5 cm sole placed below each foot alternately. In 2017, we compared the results obtained with the well-known Continuous Relative Phase (CRP), which quantifies a within-stride asymmetry index. CRP requires noise-free and accurate motion capture, which is difficult to obtain in clinical settings. As our index, the Longitudinal Asymmetry Index (ILong), is obtained from low-cost depth camera (Kinect) data (depth images averaged over several gait cycles) rather than from derived joint positions or angles, we checked whether it could deliver more reliable asymmetry information within gait compared to CRP. Hence, this study aimed to evaluate (1) the validity of CRP computed with Kinect, (2) the validity and sensitivity of ILong for measuring gait asymmetry based solely on data provided by a depth camera, (3) the clinical applicability of a posteriorly mounted camera system to avoid occlusion caused by the standard front-fitted treadmill consoles and (4) the number of strides needed to reliably calculate ILong. The results show that CRP based on time derivatives of joint angles failed to detect gait asymmetry when using Kinect data. However, our index, ILong, detected this disturbed gait reliably and could be computed from a posteriorly placed Kinect without loss of validity. A minimum of five strides was needed to achieve a correlation coefficient of 0.9 between the standard marker-based system and the low-cost depth-camera-based ILong. ILong provides a clinically pragmatic method for measuring gait asymmetry, with applications for improved patient care through enhanced disease screening, diagnosis and monitoring.
This work has been done in collaboration with the MsKLab from Imperial College London, to design new gait asymmetry indexes that could be used in daily clinical analysis.
Biomechanical analysis of tennis serve
Participants: Caroline Martin, Richard Kulpa, Benoit Bideau, Pierre Touzard.
Following our previous studies on the tennis serve, we were able to evaluate the link between performance and risk of injury. To go further, we conducted new experiments on top-level young French players (aged 12 to 18) to quantify the technical errors made (kinematics) and their impact on the risk of injury (dynamics). These experiments are part of a collaboration with the French Tennis Federation (FFT). We recently validated that the waiter's serve implies a higher risk of injury. This movement was known by coaches to be unproductive and risky, but this had never been validated.