Section: New Results
Motion & Sound Synthesis
Animating objects in real time is essential to enable user interaction during motion design. Physically-based models, an excellent paradigm for generating the motions a human user expects, tend to lack efficiency for complex shapes due to their reliance on low-level geometry (such as fine meshes). Our goal is therefore twofold: first, to develop efficient physically-based models and collision-processing methods for arbitrary passive objects, by decoupling deformations from the possibly complex geometric representation; second, to study the combination of animation models with geometric responsive shapes, enabling the animation of complex constrained shapes in real time. A further goal is to start developing coarse-to-fine animation models for virtual creatures, towards easier authoring of character animation for our work on narrative design.
Interactive paper tearing
In this work, we proposed an efficient method for modeling paper tearing in the context of interactive modeling. This is illustrated in Figure 4. The method uses geometric information to automatically detect potential starting points of tears. We further introduced a new hybrid geometric and physics-based method to compute the trajectory of tears, while procedurally synthesizing high-resolution details of the tearing path using a texture-based approach. The results are compared with real paper and with previous studies on the expected geometric paths of tearing paper.
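To illustrate the kind of geometric detection the method relies on, the sketch below flags candidate tear starting points along a sheet's boundary where the discrete turning angle is large (e.g. at notches or sharp corners). The curvature criterion, the `tear_start_candidates` name, and the threshold are illustrative assumptions, not the paper's actual detection rule.

```python
import numpy as np

def tear_start_candidates(boundary, k=5, threshold=0.5):
    """Return indices of boundary vertices with a large discrete turning angle.

    `boundary` is an (n, 2) array of 2D points ordered along the sheet's
    outline. The turning-angle test is a hypothetical stand-in for the
    geometric detection described in the report.
    """
    n = len(boundary)
    candidates = []
    for i in range(n):
        p_prev = boundary[(i - k) % n]
        p = boundary[i]
        p_next = boundary[(i + k) % n]
        v1 = p - p_prev
        v2 = p_next - p
        # angle between successive boundary directions (0 on a straight edge)
        cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12)
        angle = np.arccos(np.clip(cos_a, -1.0, 1.0))
        if angle > threshold:
            candidates.append(i)
    return candidates
```

On a square outline sampled densely along each edge, only vertices near the four corners are flagged; mid-edge vertices see collinear neighbours and a turning angle of zero.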
A Generative Audio-Visual Prosodic Model for Virtual Actors
In this work, we proposed a method for generating natural speech and facial animation in various attitudes, using neutral speech and animation as input. This is illustrated in Figure 5. Given a neutral sentence, we use phonotactic information to predict prosodic feature contours. The predicted rhythm is used to compute phoneme durations. The expressive speech is synthesized with a vocoder from the neutral utterance and the predicted rhythm, energy, and voice pitch; the facial animation parameters are obtained by adding the warped neutral motion to the reconstructed and warped predicted motion contours.
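The final step, combining a time-warped predicted contour with the neutral motion, can be sketched as follows. The function name, the per-frame array representation, and the linear-interpolation warp via `np.interp` are all assumptions for illustration; the report only states that warped neutral motion is added to the reconstructed, warped predicted motion contours.

```python
import numpy as np

def expressive_motion(neutral_motion, predicted_delta, neutral_times, predicted_times):
    """Warp a predicted motion-contour offset onto the neutral timeline and
    add it to the neutral facial-animation parameter track.

    neutral_motion  : (n,) parameter values at neutral frame times
    predicted_delta : (m,) predicted offset contour at its own times
    """
    # resample the predicted contour at the neutral frame times (linear warp)
    warped_delta = np.interp(neutral_times, predicted_times, predicted_delta)
    return neutral_motion + warped_delta
```

In practice each facial-animation parameter track would be processed this way, one attitude-specific offset contour per parameter.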
Which prosodic features contribute to the recognition of dramatic attitudes?
In this work, we explored the ability of audiovisual prosodic features (such as fundamental frequency, head motion, and facial expressions) to discriminate among different dramatic attitudes. We extracted the audiovisual parameters from an acted corpus of attitudes and structured them as frame-, syllable-, and sentence-level features. Using Linear Discriminant Analysis classifiers, we showed that prosodic features have a higher discrimination rate at the sentence level. This finding is confirmed by the perceptual evaluation results of audio and/or visual stimuli obtained from the recorded attitudes.
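The classification setup can be sketched with scikit-learn's LDA implementation. The features below are synthetic placeholders standing in for sentence-level prosodic descriptors (e.g. mean f0, f0 range, head-motion energy); the actual corpus, feature set, and evaluation protocol are those of the study, not shown here.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic sentence-level feature vectors for three hypothetical attitudes:
# each class is drawn around a different mean, mimicking separable prosody.
rng = np.random.default_rng(0)
n_per_class, n_features = 30, 6
X = np.vstack([rng.normal(loc=c, size=(n_per_class, n_features)) for c in range(3)])
y = np.repeat(np.arange(3), n_per_class)

# Cross-validated LDA accuracy, analogous to a per-level discrimination rate
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())
```

Running the same pipeline on frame-, syllable-, and sentence-level feature sets and comparing the cross-validated accuracies is one way to reproduce the kind of level-wise comparison reported above.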