Section: New Results

Models for Motion Synthesis

  • Scientist in charge: François Faure.

  • Other permanent researchers: Marie-Paule Cani, Damien Rohmer, Rémi Ronfard.

Animating objects in real time is mandatory to enable user interaction during motion design. Physically-based models, an excellent paradigm for generating the motions a human user would expect, tend to lack efficiency for complex shapes due to their use of low-level geometry (such as fine meshes). Our goal is therefore threefold: first, develop efficient physically-based models and collision processing methods for arbitrary passive objects, by decoupling deformations from the possibly complex geometric representation; second, study the combination of animation models with geometric responsive shapes, enabling the animation of complex constrained shapes in real time; third, start developing coarse-to-fine animation models for virtual creatures, towards easier authoring of character animation for our work on narrative design.

Real-time physically-based models

Participants : Armelle Bauer, Ali Hamadi Dicko, François Faure, Matthieu Nesme.

Figure 7. Illustration from [14] (Left) showing our velocity-based adaptive simulation, and from [21] (Right) on interactive visualization of muscle activity.
IMG/physically_based.png

Following the success of frame-based elastic models (Siggraph 2011), a real-time animation framework provided in SOFA and currently used in many of our applications with external partners, we further improved the efficiency of this approach this year: we developed an adaptive version of frame-based elastic models, in which frames are seamlessly attached to other frames during deformation when appropriate, in order to reduce computations [14], [33].
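
To give an intuition of this velocity-based adaptivity, the sketch below (in Python, which is not the language of the actual SOFA implementation) uses a deliberately simplified criterion: a frame whose velocity matches its parent's is temporarily attached to it and stops contributing degrees of freedom, and it is detached again when the residual force it should undergo becomes significant. The Frame class, the two thresholds and the explicit Euler step are illustrative assumptions, not the published model [14].

import numpy as np

class Frame:
    """Simplified frame: only a 3D position and velocity (the real model uses
    rigid/affine frames inside SOFA)."""
    def __init__(self, position, parent=None):
        self.position = np.asarray(position, dtype=float)
        self.velocity = np.zeros(3)
        self.parent = parent          # frame followed rigidly when inactive
        self.active = True            # inactive frames contribute no DOFs

SLEEP_SPEED = 1e-3   # assumed threshold: relative speed below which a frame attaches
WAKE_FORCE = 1e-1    # assumed threshold: residual force above which it detaches again

def update_activation(frames, residuals):
    """Attach frames whose motion is already well predicted by their parent;
    detach them when the residual force (not explained by the parent's motion,
    assumed given by the elastic model) is no longer negligible."""
    for frame, residual in zip(frames, residuals):
        if frame.parent is None:
            continue
        relative_speed = np.linalg.norm(frame.velocity - frame.parent.velocity)
        if frame.active and relative_speed < SLEEP_SPEED:
            frame.active = False
        elif not frame.active and np.linalg.norm(residual) > WAKE_FORCE:
            frame.active = True

def step(frames, forces, dt=0.01, mass=1.0):
    """Explicit Euler step: only active frames are integrated; inactive frames
    simply follow their parent, which is where computation is saved."""
    for frame, force in zip(frames, forces):
        if frame.active:
            frame.velocity = frame.velocity + dt * np.asarray(force) / mass
        else:
            frame.velocity = frame.parent.velocity.copy()
        frame.position = frame.position + dt * frame.velocity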

Frame-based models were successfully used to model limb movements in anatomical modeling [21]. The efficiency of this method enables us to advance towards the concept of a Living Book of Anatomy, where users move their own body and observe it through a tablet to get a visual illustration of anatomy in motion (see Figure 7).

Specific models for virtual creatures

Participants : Marie-Paule Cani, Michael Gleicher.

Figure 8. Illustration from [16] showing our results on dynamic skin deformations.
IMG/responsive.png

In collaboration with Loic Barthe and Rodolphe Vaillant from IRIT (U. Toulouse), Brian Wyvill (U. Victoria) and Gael Guennebaud (Manao, Inria), we developed a new automatic method for character skinning: based on the approximation of character limbs with Hermite RBF implicit volumes, we adjust the mesh vertices representing the skin by projecting them back, at each animation step, onto their iso-surface of interest. Since the vertices start from their position at the previous animation step, there is no need to specify skinning weights or to use another skinning method as a pre-computation, as in our previous implicit skinning method. Our solution avoids the well-known blending artifacts of linear blend skinning and of dual quaternions, accommodates extreme bending angles and captures elastic effects in skin deformation [16].
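
The core of the approach is this per-step projection of each skin vertex back onto its iso-value of the limb field. The Python sketch below illustrates the projection with a toy analytic sphere field standing in for the composed Hermite RBF volumes; the field, gradient and project_to_iso helpers are hypothetical simplifications written for illustration, not the published algorithm [16].

import numpy as np

def field(p, center=np.zeros(3), radius=1.0):
    """Toy implicit field (a sphere) standing in for the composed Hermite RBF
    volumes approximating a limb."""
    return np.linalg.norm(p - center) - radius

def gradient(f, p, eps=1e-5):
    """Central finite-difference gradient of the field at p."""
    g = np.zeros(3)
    for i in range(3):
        d = np.zeros(3)
        d[i] = eps
        g[i] = (f(p + d) - f(p - d)) / (2.0 * eps)
    return g

def project_to_iso(p_prev, f, iso_value, max_iters=10, tol=1e-6):
    """Move a skin vertex from its position at the previous animation step back
    onto its iso-surface of interest, with Newton-like steps along the gradient."""
    p = np.asarray(p_prev, dtype=float).copy()
    for _ in range(max_iters):
        residual = f(p) - iso_value
        if abs(residual) < tol:
            break
        g = gradient(f, p)
        p = p - residual * g / max(float(np.dot(g, g)), 1e-12)
    return p

# Each step starts from where the previous one ended, so no skinning weights or
# auxiliary skinning pass are needed.
vertex_previous = np.array([1.3, 0.2, -0.1])
vertex_updated = project_to_iso(vertex_previous, field, iso_value=0.0)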

This year, we also studied how character eyes and gaze should be animated. This extensive study resulted in a state of the art report published at the Eurographics conference [32].