The EVASION team, GRAVIR-IMAG laboratory (UMR 5527), is a joint project between CNRS, INRIA, Institut National Polytechnique de Grenoble (INPG) and University Joseph Fourier (UJF).
The EVASION project addresses the synthesis of images of natural scenes and phenomena. This aim leads us to work jointly on the specification, representation, animation, visualisation and rendering of these scenes. In addition to the high impact of this research on audiovisual applications (3D feature films, special effects, video games), we also address the rising demand for efficient visual simulations in areas such as environmental and medical applications. We thus study objects from the animal, mineral and vegetable realms, all of which may be integrated into a complex natural scene. We constantly seek a balance between efficiency and visual realism. This balance depends on the application (e.g., the design of immersive simulators requires real-time performance, while the synthesis of high-quality images may be the primary goal in other applications).
The synthesis of natural scenes has been studied only recently, compared to that of man-made environments, due to the difficulty of handling the high complexity of natural objects and phenomena. This complexity can express itself in the number of elements (e.g., a prairie, hair), in the complexity of the shapes (e.g., many vegetable forms or animal organisms), in the motion (e.g., a cloud of smoke, a stream), or in the local appearance of the objects (a lava flow).
To tackle this challenge:
we exploit a priori knowledge from other sciences as much as possible, in addition to inputs from the real world such as images and videos;
we take a transversal approach with respect to the classical decomposition of Computer Graphics into Modelling, Rendering and Animation: we instead study the modelling, animation and visualisation of a phenomenon in a combined manner;
we reduce computation time by developing alternative representations to traditional geometric models and finite element simulations: hierarchies of simple coupled models instead of a single complex model; multiresolution models and algorithms; adaptive levels of detail;
we take care to keep an intuitive user control;
we validate our results through comparison with real phenomena, based on perceptual criteria.
Our research strategies are twofold:
Development of fundamental tools, i.e., of new models and algorithms satisfying the conditions above. Indeed, we believe that there are enough similarities between natural objects to factorise our efforts through the design of such generic tools. For instance, whatever their nature, natural objects are subject to physical laws that constrain their motion and deformation, and sometimes their shape (which results from the combined action of growth and aging processes). This leads us to conduct research on adapted geometric representations, physically-based animation, collision detection, and phenomenological algorithms to simulate growth and aging phenomena. Second, the high number of details found in natural objects, sometimes similar at different resolutions, leads us to design specific adaptive or multiresolution models and algorithms. Lastly, the ability to efficiently display very complex models and data sets is required in most of our applications, which leads us to contribute to the visualisation domain.
Validation of these models by their application to specific natural scenes. We cover scenes from the animal realm (animals in motion, parts of the human body, from internal organs dedicated to medical applications to the skin, faces and hair needed for character animation), the vegetable realm (complex vegetable shapes, specific materials such as tree bark, animated prairies, meadows and forests) and the mineral realm (lava flows, mud flows, avalanches, streams, smoke, clouds).
The fundamental tools we develop and their applications to specific natural scenes are opportunities to enhance our work through collaborations with both industrial partners and scientists from other disciplines (the current collaborations are listed in ). This section briefly reviews our main application domains.
The main industrial applications of the new representations, animation and rendering techniques we develop, in addition to many of the specific models we propose for natural objects, are in the audiovisual domain: a large part of our work is used in joint projects with the special effects industry and/or with video games companies.
Some of the geometric representations we develop, and their efficient physically-based animations, are particularly useful in medical applications involving the modelling and simulation of virtual organs and their use in either surgery planning or interactive pedagogical surgery simulators. All of our applications in this area are developed jointly with medical partners, which is essential both for the specification of the needs and for the validation of results.
Some of our work on the design and rendering of large natural scenes (mud flows, rock flows, glaciers, avalanches, streams, forests, all simulated on controllable terrain data) leads us to very interesting collaborations with scientists from other disciplines, ranging from biology and environmental science to geology and mechanics. In particular, we are involved in interdisciplinary collaborations in the domains of impact studies and the simulation of natural risks, where visual communication using realistic rendering is essential for conveying simulation results.
Some of the new geometrical representations and deformation techniques we develop lead us to design novel interactive modelling systems. This includes for instance applications of implicit surfaces, multiresolution subdivision surfaces, space deformations and physically-based clay models. Some of this work is exploited in contacts and collaborations with the industrial design industry.
Lastly, the new tools we develop in the visualisation domain (multiresolution representations, efficient display for huge data-sets) are exploited in several industrial collaborations; for instance within the energy and drug industries. These applications are dedicated either to the visualisation of simulation results or to the visualisation of huge geometric datasets (an entire power plant, for instance).
Although software development is not among our main objectives, our various projects lead us to regular activity in this area, either through specific projects or through the development of general libraries. This section describes only the software we are developing for the public domain.
AnimAL is a C++ library mostly dedicated to the animation of 3D models. The kernel of AnimAL is designed to be highly flexible thanks to generic programming (templates). It includes classes representing basic entities (arrays, rotations, solid transforms, linear algebra, etc.) as well as standard algorithms (numerical integration, optimisation, interpolation, etc.). This kernel uses only the standard C++ libraries; it is thus totally independent of the other modules and directly portable to other architectures or systems.
Apart from the basic tools and standard algorithms, the existing code in AnimAL mainly results from our research (physical mass-spring systems, collision detection). We recently proposed its integration into a new modular architecture supporting input/output of 3D files, internal 3D scene graphs, and a graphical user interface that allows controlling the animation and viewing the animated 3D models. This architecture is partly based on free software developed by the ARTIS team (X3DToolKit and QGLViewer).
AnimAL, in its preliminary form, was already used within four collaborations:
with the LIGIM research laboratory at Lyon. The goal was to model and animate fractured materials.
with the LIFL research laboratory at Lille. Some parts of AnimAL were used in a surgical simulator.
with the University of Lecce (Italy).
with the University of Tuebingen (Germany) for cloth simulation.
Its main ideas are being transferred to the SOFA framework, under development in collaboration with INRIA-Futurs, INRIA-Sophia and CIMIT (Boston), which aims to provide an international open-source platform for surgery simulation.
The OpenGL graphics programming API is widely used by researchers to work with recent graphics hardware. To access hardware-specific functions, it relies on an extension loading mechanism. Given the large variety of video card models, it is very important to be able to check at runtime whether extensions are available. Moreover, the same program often needs to include different versions of the same rendering code in order to adapt to the set of available extensions. Unfortunately, the loading mechanism provided by OpenGL differs between Linux and Windows, and requires a large amount of very repetitive loading code for each extension (more than 200 extensions exist). gluX is a cross-platform, easy-to-use OpenGL extension loader (see http://www-evasion.imag.fr/Ressources). It offers a very simple mechanism for loading OpenGL extensions, detecting at runtime whether the required extensions are present, and selecting the appropriate rendering code. It is a very convenient tool, as it lets developers exchange programs without the painful task of writing extension loading code for each platform and video card model.
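The runtime check gluX automates can be illustrated without its actual API (not detailed here): classically, extension availability is tested by searching the space-separated string returned by glGetString(GL_EXTENSIONS). Below is a sketch of such a token-exact search, written as a pure helper (name ours) so that it can run without a GL context; note that a naive substring search would wrongly match "GL_ARB_shadow" inside "GL_ARB_shadow_ambient".

```cpp
#include <cstring>

// Return true if `name` appears as a complete, space-delimited token
// in the extension string (substring matches must not count).
bool hasExtension(const char* extensions, const char* name)
{
    const std::size_t len = std::strlen(name);
    const char* p = extensions;
    while ((p = std::strstr(p, name)) != nullptr) {
        const bool startOk = (p == extensions) || (p[-1] == ' ');
        const bool endOk   = (p[len] == ' ') || (p[len] == '\0');
        if (startOk && endOk)
            return true;   // found as a whole token
        p += len;          // partial match: keep searching
    }
    return false;
}
```

In a real program the first argument would come from glGetString(GL_EXTENSIONS), and a loader such as gluX would additionally fetch the extension's function pointers.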
The MobiNet software allows for the creation of simple applications such as video games, virtual physics experiments and pedagogical math illustrations. It relies on an intuitive graphical interface and language which allow the user to program a set of mobile objects (possibly over a network). It is available in the public domain for Linux and Windows at http://www-evasion.imag.fr/mobinet/index.en.html. It originated from four members of EVASION and ARTIS (including a "monitorat" project). The main aim is pedagogical: MobiNet allows young students at high-school level with no programming skills to experiment with the notions they learn in math and physics by modelling and simulating simple practical problems, and even simple video games. This platform has been used massively during the INPG "engineer weeks" since 2002: 150 senior high-school pupils per year, each attending a 3-hour practical session. This work is partly funded by INPG. Various contacts are currently being developed in the educational world.
Besides the "engineer weeks", every year a group of "monitor" PhD students conducts an experiment based on MobiNet with a high-school class as part of their teaching duties (see section ). Moreover, presentations are given in workshops and institutes, and a web site repository is maintained.
The purpose of this research is to model and compute several geometrical or topological characteristics on surfaces or in volumes.
The first part of this work is the detection of ``constriction areas'' on a closed surface. We define constrictions as simple closed curves with locally minimal length, and use simple curve and path computation algorithms to construct them. We have developed a new algorithm to compute major constrictions on closed polyhedral surfaces, which overcomes the drawbacks of previous approaches (for example, the computation time has been significantly reduced) using curvature directions and values. This work has been published as a short paper at Eurographics'05.
Applications of this work are widespread, from classification of shapes to object decomposition into simple parts, and to detection of singularities. We currently focus on two main applications: the quantification of aneurysms in volumetric medical images (with François Faure) and the automatic detection of articulations on animal models, to simplify the animation of these models (with Lionel Revéret).
This research is done in collaboration with Stefanie Hahmann from the LMC/IMAG. The aim is to define a representation of surfaces that combines the advantages of subdivision surfaces and NURBS surfaces, for use in CAD/CAM systems or animation software. Subdivision surfaces can represent surfaces of arbitrary topology, with the ability to efficiently encode local detail, but they lack an explicit parametric formulation, which is required by many techniques in CAD/CAM systems, including surface interrogation, trimming and offsetting. NURBS surfaces, on the other hand, are not efficient for representing surfaces of arbitrary topology. A new surface model has been developed, defined by low-degree polynomial patches that connect smoothly with G1 continuity. It has been shown how this model can be hierarchically refined in order to compactly add local detail to the surface. Figure illustrates a dog's head designed with a geometric modelling package based on our new surface model. A paper on this topic has been published in the journal ACM Transactions on Graphics .
Thereafter, automatic reconstruction techniques have been developed that take as input a dense triangular mesh and output a compact surface representation using our new model. Special care must be devoted to the proper parameterization of the dense mesh over the base domain of the reconstructed surface. Figure illustrates the reconstruction of the Max Planck model. Alex Yvart defended his PhD on this topic in December 2004. The reconstruction method has been published in the conference SMI'05 .
Alex Yvart defended his PhD thesis on December 4, 2004. He now works in the R&D department of Renault as a chief engineer.
This work is done in collaboration with Stefanie Hahmann from LMC/IMAG. A collaboration is also taking place on this topic with Prof. Gershon Elber from the Technion, in the framework of the Aim@Shape Network of Excellence (see Section ). The purpose of this research is to allow complex nonlinear geometric constraints in a multiresolution geometric modelling environment. Two kinds of constraints were investigated first: constant area and constant length, both for the modelling of curves.
For the area constraint, a wavelet decomposition of the curve is used, and the bilinear form corresponding to the area enclosed by the curve is expressed in this wavelet basis. This enables us to enforce a constant-area constraint in real time, even for complex curves with on the order of 1000 control points. This work has been published in the journal Computer Aided Geometric Design .
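The bilinear form in question can be sketched as follows (notation ours, not necessarily the paper's): for a closed curve c(t) = (x(t), y(t)) expanded over a basis of functions φ_i with coefficients (x_i, y_i),

```latex
% Enclosed area of a closed curve as a bilinear form in the coefficients
A \;=\; \frac{1}{2}\oint \bigl(x\,y' - y\,x'\bigr)\,dt
  \;=\; \sum_{i,j} x_i\, M_{ij}\, y_j ,
\qquad
M_{ij} \;=\; \frac{1}{2}\int \bigl(\varphi_i\,\varphi_j' - \varphi_i'\,\varphi_j\bigr)\,dt .
```

Once the antisymmetric matrix M has been precomputed in the wavelet basis, keeping A constant is a single quadratic constraint on the control coefficients, which is what makes real-time enforcement on curves with around 1000 control points plausible.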
Concerning the constant-length constraint, a multiresolution editing tool for planar curves which maintains a constant length has been developed. One possible application is the modelling of folds and wrinkles. This work has been published in .
Lately, constant-volume constraints for the multiresolution deformation of B-spline tensor-product surfaces as well as subdivision surfaces have been investigated. Fig. illustrates the deformation of a subdivision surface with constant volume.
Basile Sauvage has defended his PhD thesis on December 5, 2005 .
We developed a method based on space deformations for interactively sculpting a shape while keeping its topological genus unchanged: the user interactively sweeps tools that deform space along their path. Objects that overlap with the deformed part of space are re-meshed in real time so that they are always accurately displayed. Our method ensures that the resulting deformations are foldover-free, which prevents self-intersections between parts of the deformed shapes. After receiving the best paper prize at a conference in 2004, a revised version of our paper was published in the journal Graphical Models . This work and its extension to constant-volume space deformations (see Figure ) were presented at the Summer School on Interactive Shape Modeling organized by the Aim@Shape network of excellence and in the EUROGRAPHICS'2005 tutorial on Interactive Shape Modelling .
Antoine Bouthors continues his PhD on Cumulus clouds. This year, he worked with a student on dynamic remeshing of implicit surfaces (a paper has been submitted to SMI'06, see fig. ), and he studied statistics of photon-tracing and analytical models.
Participants: Marie-Paule Cani, Jamie Wither
Easily creating 3D models by using sketch-based techniques is attracting more and more attention.
In collaboration with the computer graphics group at IRIT (Toulouse), we created a user-friendly modelling system that enables non-expert users to generate a wide range of free-form shapes through interactive sketching. A skeleton, in the form of a graph of branching polylines and polygons, is first extracted from the user's sketch. The 3D shape is then defined as a convolution surface generated by this skeleton. Subsequent 2D strokes are used to infer new parts of the object, which are combined with the previous ones using smooth CSG operators, the range of blending being adapted to the scale. This work appeared in Pacific Graphics .
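The underlying principle of a convolution surface can be illustrated as follows (the kernel choice, parameter values and function names are ours, not necessarily those of the paper): the field of one skeleton segment is the integral of a decreasing kernel of the distance along the segment, and the final surface is an isolevel of the summed fields of all segments. A numerical sketch with a Cauchy-like kernel:

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

static double dist(const Vec3& a, const Vec3& b)
{
    const double dx = a[0]-b[0], dy = a[1]-b[1], dz = a[2]-b[2];
    return std::sqrt(dx*dx + dy*dy + dz*dz);
}

// Convolution field of one skeleton segment [a,b] at point p, integrated
// numerically with the Cauchy-like kernel k(d) = 1 / (1 + s^2 d^2)^2.
// The modelled surface is an isolevel f(p) = c of the summed fields.
double segmentField(const Vec3& a, const Vec3& b, const Vec3& p,
                    double s = 1.0, int samples = 200)
{
    const double len = dist(a, b);
    const double dt = 1.0 / samples;
    double f = 0.0;
    for (int i = 0; i < samples; ++i) {
        const double t = (i + 0.5) * dt;        // midpoint rule
        const Vec3 q = { a[0] + t*(b[0]-a[0]),
                         a[1] + t*(b[1]-a[1]),
                         a[2] + t*(b[2]-a[2]) };
        const double d = dist(p, q);
        const double k = 1.0 / (1.0 + s*s*d*d);
        f += k * k * len * dt;                  // kernel times arc-length element
    }
    return f;
}
```

Because contributions of all segments simply add up, the field varies smoothly where skeleton branches meet, which is what produces the smooth blends at joints.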
A new PhD student, Jamie Wither, funded by the European project Visitor, joined the group in November to work on sketch and annotation techniques for the design and animation of natural scenes.
The animation of a 3D model is usually driven by a hierarchical representation of its articulations called the animation skeleton. Creating this animation skeleton is a laborious task, since it is done by hand. We aim to simplify this task by using geometric information on the model to first compute a geometric skeleton, which can later be converted into an animation skeleton.
The first part of this work is thus to compute a geometric skeleton that contains the relevant information and can easily be converted into an animation skeleton. To do so, we focus on Reeb graphs, because such skeletons are simple edge-vertex graphs, their structure is closely related to the model's geometry and topology, and their computation is fast. This work started this summer, with tests of several kinds of Reeb graphs and of how they can be modified.
We developed a parameterization-free texture representation capable of defining high-resolution details on surfaces without distortion (see figure ). It draws on octree textures (surface colour data stored in a volumetric octree) and sprites (small repeated images relying on a set of reference patterns): our representation stores the location and orientation of the pattern instances in the octree.
Our goal is to make the model GPU-compliant, meaning that the data structures reside on the graphics board: they must be dynamically updatable from the CPU by an application (e.g. a painting tool), and at run time our algorithm, implemented as a dedicated fragment shader, must be able to recover the texture colour for any given pixel.
This work has been published at I3D'05 . A detailed partial version has been published as a chapter in the book GPU-Gems II . Sylvain Lefebvre has defended his PhD in April, 2005.
We collaborated with ARTIS on the development of a new method for rendering hard shadows cast by a point light source. It takes advantage of triangle strips and of fast culling capabilities of graphics hardware that are not available to conventional robust methods such as Z-fail (see figure ).
This project is part of a collaboration with IDAV ( http://www.idav.ucdavis.edu/). A high-level approach to describing the characteristics of a surface is to segment it into regions of uniform curvature behavior and to construct an abstract representation given by a (topology) graph. We propose a surface segmentation method based on discrete mean and Gaussian curvature estimates. The surfaces are obtained from three-dimensional imaging data sets by isosurface extraction, after presmoothing the data and postprocessing the isosurfaces with a surface-growing algorithm. We generate a hierarchical multiresolution representation of the isosurface. Segmentation and graph generation algorithms can be performed at various levels of detail. At a coarse level of detail, the algorithm detects the main features of the surface. This low-resolution description is used to determine constraints for the segmentation and graph generation at the higher resolutions. We have applied our methods to MRI data sets of human brains. The hierarchical segmentation framework can be used for brain-mapping purposes. These results have been published in .
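Discrete Gaussian curvature on a triangle mesh is classically estimated via the angle deficit at each vertex; the sketch below is illustrative of this principle, not necessarily the project's exact estimator (the full estimate divides the deficit by an area term around the vertex, which fixes the scale but not the sign).

```cpp
#include <cmath>
#include <vector>

// Angle deficit at a mesh vertex: 2*pi minus the sum of the corner
// angles of the incident triangles at that vertex. Positive deficit
// indicates sphere-like curvature, negative indicates saddle-like,
// zero indicates a (developable) flat neighbourhood.
double angleDeficit(const std::vector<double>& angles)
{
    const double pi = std::acos(-1.0);
    double sum = 0.0;
    for (double a : angles) sum += a;
    return 2.0 * pi - sum;
}
```

For instance, the corner of a cube (three right angles) yields a deficit of pi/2, while a flat vertex surrounded by six equilateral triangles yields zero.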
This project is part of a collaboration with CEA/CESTA. CEA/CESTA has to perform numerical simulations on very large data sets, in thermodynamics, mechanics, aerodynamics, neutronics, etc. Visualization of the results of these simulations is crucial in order to understand the phenomena being simulated. The visualization techniques need to be interactive, if not real-time, to be helpful to engineers. Multiresolution techniques are therefore required to accelerate the visual exploration of the data sets. We are developing multiresolution algorithms devoted to specific types of data sets. Our current focus is on volumetric data sets based on tetrahedral grids in which inner structures of dimension 2, 1 or 0 must be preserved, both geometrically and topologically. To maintain these important features during the multiresolution decomposition, techniques based on combinatorial topology have been developed. The first results concern the preservation of 1D and 0D structures in triangular grids. Figure illustrates the preservation of polylines (in red) and specific points (in yellow) in a CAD/CAM mesh. The generalization of these results to the volumetric case has been developed and is currently submitted for publication.
The results in 2D have been published in the journal The Visual Computer . Fabien Vivodtzev has defended his PhD thesis on December 5th .
This project is part of a collaboration with the research and development department of EDF, and with LPPA (Laboratoire de Physiologie de la Perception et de l'Action, Collège de France). The general context is similar to the collaboration with CEA (Section ), i.e., the visualisation of large numerical data sets. The focus of this project is on the following problem: how should human perception be taken into account in visualization algorithms, and more specifically in algorithms based on multiresolution techniques? Previous work in this area is mostly based on image-analysis techniques, used to measure important features in a static image resulting from some visualization algorithm. These results do not take into account information about the specific person using the visualization system. We are especially interested in taking such information into account, for example the point at which the user is looking. We also want to insert dynamic parameters into the perceptual measure, such as the movement of the user's head, since such parameters greatly influence the actual perception of the rendered scene. In the framework of this collaboration, EDF is funding a PhD grant on these topics, started by Christian Boucheny in December 2005.
During his Master's thesis, Guillaume Gilet (advised by Alexandre Meyer and Fabrice Neyret) developed an adaptive model of surfels (a point-based representation): the size of the points (i.e., discs) depends on distance and visibility, and each disc represents a set of leaves (these sets are organized hierarchically for that purpose). Moreover, surfels switch to classical meshes for close viewpoints. This allows for the interactive rendering of forests with both close and distant trees, and continuous flyovers of entire forests (see Figure ).
In 2001, we developed a model for representing the features of the flow in animated rivers, based on quasi-stationary shock-waves and ripples. This allows us to obtain very precise features, with very compact data (vector representation of features, no data where no features are present).
The purpose of this work, done by Frank Rochet during his Master's thesis , is to render the water surface both efficiently and with fine details, based on this vector representation of features. To this end, we developed two representations of the water surface: a geometry-based one for close viewpoints, and a bump-map-based one for distant viewpoints. In both cases, we generate geometric strips as a support for the shock waves. Upon these, the main wave and ripples are represented with feature-aligned polygons in the first case, and with a height-field profile (i.e., a 1D texture used to generate normal maps) in the second. Only features visible in the view frustum are generated. The difficult issues are the transition between the two models and the intersection of primitives.
With this model, we were able to obtain the real-time high resolution rendering of an animated river, which is totally impractical with ordinary grid-based methods, as illustrated in figure .
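The distant-viewpoint representation, a 1D height profile across the wave turned into normals for bump mapping, can be sketched as follows (a finite-difference version; the function name and parameters are ours):

```cpp
#include <array>
#include <cmath>
#include <vector>

// Turn a 1D height profile (the "1D texture" across a wave) into unit
// 2D normals via the finite-difference slope between adjacent samples.
// dx is the spacing between samples of the profile.
std::vector<std::array<double,2>> profileToNormals(const std::vector<double>& h,
                                                   double dx)
{
    std::vector<std::array<double,2>> normals(h.size() - 1);
    for (std::size_t i = 0; i + 1 < h.size(); ++i) {
        const double slope = (h[i+1] - h[i]) / dx;
        const double len = std::sqrt(slope*slope + 1.0);
        normals[i] = { -slope / len, 1.0 / len };  // unit normal to the profile
    }
    return normals;
}
```

At render time such normals perturb the shading of the flat strip, giving the impression of the wave's relief without any extra geometry.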
Self-shadowing is particularly important to convey an adequate impression of volume for complex natural objects such as hair (see Figure ). We have developed an efficient self-shadowing method (submitted for publication) particularly well adapted to the rendering of animated objects, since it requires no geometry-based pre-computation. Our method is based on a 3D light-oriented density map, a novel structure that combines an optimized volumetric representation of hair with a light-oriented partition of space. Using this 3D map, accurate hair self-shadowing can be processed interactively (several frames per second for a full hairstyle) on a standard CPU. Beyond the fact that our application is independent of any graphics hardware (and thus portable), it can easily be parallelized for better performance; a parallel implementation makes the method run in real time.
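The core idea of a light-oriented density map can be sketched in one dimension (parameter names are ours, and the real structure is a full 3D partition of space): hair density is accumulated slice by slice along the light direction, and the light reaching a slice is the transmittance of everything in front of it.

```cpp
#include <cmath>
#include <vector>

// 1D sketch of a light-oriented density map. sliceDensity[k] is the hair
// density of slice k along the light direction; the returned value for
// slice k is the fraction of light that survives slices 0..k-1, i.e.
// exp(-sigma * accumulated density * slice thickness) (Beer-Lambert).
std::vector<double> lightTransmittance(const std::vector<double>& sliceDensity,
                                       double sigma, double sliceThickness)
{
    std::vector<double> transmittance(sliceDensity.size());
    double accumulated = 0.0;
    for (std::size_t k = 0; k < sliceDensity.size(); ++k) {
        transmittance[k] = std::exp(-sigma * accumulated * sliceThickness);
        accumulated += sliceDensity[k];
    }
    return transmittance;
}
```

Because only densities (not geometry) enter the map, it can be rebuilt every frame for animated hair, which is why no geometry-based pre-computation is needed.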
Adding details to an animation while avoiding the heavy cost of a full physically-based simulation can be done through a procedural approach. In 2004, we proposed a method of that kind for generating skin and cloth wrinkles on top of a standard character animation. These wrinkles are now combined with a simple dynamic layer that generates the vibrations of the flesh in real time: the dynamic effects are added through a double skinning operation, which blends the current flesh volume computed through standard smooth skinning with its position in a dynamic frame attached to the skeleton via a visco-elastic element. A set of weights, which can be computed automatically by taking the morphology of the limb into account, controls the local behavior of the tissues. The resulting real-time technique is well suited to video games or any application where dynamic effects must be added, at almost no cost, to an existing animation sequence (see Figure ).
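The double skinning idea can be sketched in one dimension (a deliberately simplified version with made-up names; the real method operates on 3D frames attached to the skeleton): a dynamic frame lags behind the skeleton through a damped spring, and each vertex blends its standard skinned position with its position in that lagging frame.

```cpp
// Minimal 1D sketch of "double skinning": the dynamic frame follows a
// target (the skeleton) through a visco-elastic element, integrated
// here with symplectic Euler.
struct DynamicFrame {
    double pos = 0.0, vel = 0.0;
    void step(double target, double stiffness, double damping, double dt)
    {
        const double accel = stiffness * (target - pos) - damping * vel;
        vel += accel * dt;   // update velocity first (symplectic Euler)
        pos += vel * dt;
    }
};

// Blend the standard smooth-skinned position with the position carried
// by the dynamic frame; `weight` plays the role of the per-vertex
// weights mentioned in the text.
double doubleSkin(double skinnedPos, double framePos, double weight)
{
    return (1.0 - weight) * skinnedPos + weight * framePos;
}
```

When the skeleton stops abruptly, the frame overshoots and oscillates before settling, which is precisely what produces the flesh vibration at almost no cost.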
Efficiently animating virtual clay is a challenge, since neither the optimisations proposed for solids (based on a constant topology) nor those for fluids (there is a moving limit surface) are directly applicable. In 2004, we proposed the first real-time model for this material, based on a layered approach. Three sub-models, respectively handling large-scale deformations, local matter displacements and surface tension, cooperate over time to provide the desired behaviour. Our model handles an arbitrary number of tools that simultaneously interact with the clay. This makes it usable for direct hand manipulation, which is the last step in Guillaume Dewaele's thesis, defended on December 7, 2005 : the user's hand motion is captured on video and used to control a virtual hand, serving as a multiple tool for editing the clay.
The aim of this research is to develop novel methods for the representation and physical simulation of variations in highly deformable mesh structures for real-time animation (e.g., from cloth simulation to virtual surgery) through a level-of-detail (LOD) topological approach. The research currently concentrates on the use of two primary hierarchical data structures, the octree and the quadtree, for managing both the changing multiresolution detail and the physical properties of mesh structures such as cloth during simulation (see Figure ).
One of the primary ideas of this research is to use either data structure not only as a multiresolution paradigm for on-the-fly LOD mesh generation during animation, but also to integrate it directly into the manipulation and management of the physical properties of the mesh. For example, collision detection could be handled simultaneously through the same data structure that manages the current local mesh resolution, thus negating the necessity of a secondary data structure for this task.
We address the question of simulating highly deformable objects, such as human tissues or cloth, in real time. The main problem is to detect and handle multiple (self-)collisions within the bodies. We have developed a new approach for collision detection, based on a pool of "active pairs" of geometric primitives. These pairs are randomly chosen, and they iteratively converge to a local distance minimum or to a pair of colliding elements. Managing the size of the pool allows us to tune the computation time devoted to collision detection. Temporal coherence is obtained by reusing the interesting pairs from one step to the next. We have participated in several state-of-the-art reports , .
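The local convergence of an active pair can be sketched on two 2D polylines (a deliberately simplified stand-in for the actual geometric primitives; all names are ours): starting from a randomly chosen pair of sample indices, the pair greedily steps to the neighbouring pair with the smallest distance until it reaches a local minimum of the distance.

```cpp
#include <array>
#include <utility>
#include <vector>

using Pt = std::array<double, 2>;

static double d2(const Pt& a, const Pt& b)
{
    const double dx = a[0]-b[0], dy = a[1]-b[1];
    return dx*dx + dy*dy;
}

// Greedy descent of one "active pair" (i, j) between two polylines A and
// B: repeatedly move to the neighbouring index pair with the smallest
// squared distance, stopping at a local minimum. Each step strictly
// decreases the distance, so the loop terminates.
std::pair<int,int> descend(const std::vector<Pt>& A,
                           const std::vector<Pt>& B, int i, int j)
{
    for (;;) {
        int bi = i, bj = j;
        double best = d2(A[i], B[j]);
        for (int di = -1; di <= 1; ++di)
            for (int dj = -1; dj <= 1; ++dj) {
                const int ni = i + di, nj = j + dj;
                if (ni < 0 || nj < 0 ||
                    ni >= (int)A.size() || nj >= (int)B.size())
                    continue;
                const double d = d2(A[ni], B[nj]);
                if (d < best) { best = d; bi = ni; bj = nj; }
            }
        if (bi == i && bj == j) return {i, j};  // local minimum reached
        i = bi; j = bj;
    }
}
```

In the full method many such pairs are maintained in a pool and given a bounded number of descent steps per frame, which is what makes the collision-detection cost tunable.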
This year we have focused on the robustness of collision handling for highly stiff objects, and have come up with a new formulation of the dynamics of stiff elastic bodies. This new approach is based on an equation system that gathers the implicit integration equations and the contact constraints modeled using Lagrange multipliers. The contact constraints take into account the established contacts as well as the anticipated collisions computed using continuous collision detection. The contact constraints are actually modeled as inequalities, and the problem can be formulated as a generic quadratic programming (QP) problem. We are currently investigating solutions of this QP that meet real-time constraints.
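A generic sketch of such a formulation (notation ours, not necessarily the one used in this work): an implicit time step yields a linear system in the next velocity v, and the inequality contact constraints turn it into a QP,

```latex
% One implicit time step with contact constraints, as a QP in the next
% velocity v (h: time step, M: mass matrix, f: internal forces, J: the
% contact Jacobian stacking established and anticipated contacts).
\min_{v}\ \tfrac{1}{2}\, v^{\top} A\, v \;-\; b^{\top} v
\quad \text{subject to} \quad J\, v \;\ge\; c ,
\qquad
A \;=\; M - h\,\frac{\partial f}{\partial v} - h^{2}\,\frac{\partial f}{\partial x},
\quad
b \;=\; M v^{t} + h\, f(x^{t}) .
```

The Lagrange multipliers of the inequality rows are the contact forces, and they are constrained to be non-negative so that contacts can only push, never pull.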
We continue a collaboration on surgical simulation with the TIMC laboratory through a co-advised PhD thesis. The purpose is to develop new finite element models for the interactive physically-based animation of human tissue. A new model of tetrahedron-based finite elements has been proposed (see fig. ) and compared with other approaches (see fig. ). Its main feature is that it remains physically plausible even when large displacements and large deformations occur (see fig. ), while being almost as computationally efficient as a linear finite element method.
Realistically predicting the shape of hair requires an accurate mechanical model that takes into account the mechanical properties of inextensible, naturally curled hair strands. In the framework of our collaboration with the industrial partner L'Oréal , we developed a new physically-based method for predicting natural hairstyles in the presence of gravity and collisions . The method is based upon a mechanically accurate model for static elastic rods (the Kirchhoff model), which accounts for the natural curliness of hair as well as for hair ellipticity. The equilibrium shape is computed in a stable and easy way by energy minimization. This yields the various typical hair configurations that can be observed in the real world, such as ringlets (see Figure ). The method can generate different hair types with very few input parameters and can be used to perform virtual hairdressing operations such as wetting, cutting and drying hair. We are currently working on its extension to the dynamic animation of hair in motion.
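In a Kirchhoff rod model, the equilibrium in question minimizes an energy roughly of the following form (notation ours, not necessarily the paper's):

```latex
% Static Kirchhoff rod energy: kappa_1, kappa_2 are the material
% curvatures, tau the twist; the superscript 0 marks the natural values
% that encode curliness, and EI_1 != EI_2 models the ellipticity of the
% strand's cross-section. The last term is the gravitational potential.
E \;=\; \frac{1}{2}\int_{0}^{L}
  \Bigl[\, EI_1\,(\kappa_1-\kappa_1^{0})^{2}
        + EI_2\,(\kappa_2-\kappa_2^{0})^{2}
        + \mu J\,(\tau-\tau^{0})^{2} \Bigr]\, ds
  \;+\; \int_{0}^{L} \rho\, g \cdot r(s)\, ds ,
\qquad \lVert r'(s)\rVert = 1 .
```

The inextensibility constraint on the centerline r(s) is what distinguishes hair from a stretchable spring model, and the competition between the natural-curvature terms and gravity is what produces ringlets.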
The approach consists in considering fluids in the vorticity domain and representing vorticity as 1D filaments (linked particles). This is motivated by the fact that in numerous situations, the non-zero vorticity of a fluid concentrates in compact features which tend to be curves (e.g., a smoke ring or a tornado). This allows for a huge compression of data and calculations. Moreover, these "vortical objects" are directly related to visual features (atomic mushroom, billowing, etc.), which is very interesting when letting an artist tune and control a fluid. The Lagrangian aspect yields high precision on locations and avoids numerical dissipation. Combined with the fact that the background fluid between features has zero vorticity, it allows us to simulate an open environment in which only the curves associated with turbulent features are stored. Finally, considering 1D particles (curves) permits a correct account of vortex stretching. Since the representation is very compact (it amounts to a vectorial description of the current state of the fluid), it allows efficient calculation and convenient manipulation in a modeler, much like ordinary keyframing and interpolation of geometric curves; indeed, it is even feasible to store the entire vectorial animation cheaply. This work, illustrated in figure , has led to a publication at I3D .
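The velocity reconstruction behind this representation is the classical Biot-Savart law: for a filament of circulation Γ along a curve c(s) with unit tangent t(s),

```latex
% Biot-Savart law for a vortex filament: the velocity anywhere in the
% fluid is recovered from the 1D curve alone, summing over all filaments.
u(x) \;=\; \frac{\Gamma}{4\pi}\oint
  \frac{t(s)\times\bigl(x - c(s)\bigr)}{\lVert x - c(s)\rVert^{3}}\, ds .
```

Since the entire velocity field follows from the curves, storing and interpolating the filaments really is a full, compact description of the flow's state.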
This research project aims at capturing motion from the automatic processing of video to provide information for 3D animation of characters, such as humans or animals. Unlike several approaches in computer vision research, the goal is not to recognize activities, but rather to acquire robust geometric hints to control animation. Three main projects are currently under investigation: facial animation, deformation of skin surface and motion of animals.
In several domains of character animation, footsteps are among the most important constraints: they guarantee one of the main aspects of a realistic animation of locomotion. This task, when done manually, is even more complex for quadrupeds. Being able to automatically predict footstep information from video footage is thus an important contribution. The method we developed is based on a dedicated image filter designed to detect the pattern of animal legs. Over the time range of the video, the positive filter responses are clustered so that a single trajectory point is produced per leg. As 2D images are considered (profile view), ambiguities arise in the prediction of each individual foot position when the side views of the legs cross each other (typically the left and right sides of the animal, and the front and back legs at higher velocities). A motion model has been developed to handle this problem. This work has been done in collaboration with the University of Washington, in Seattle, USA.
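The response-to-trajectory reduction can be sketched as follows (the actual filter and clustering scheme are not detailed here; this toy 1D k-means over synthetic detections is only an illustrative stand-in):

```python
import numpy as np

# Illustrative stand-in for the clustering step described above: within one
# frame, several positive filter responses fire around each leg; a simple
# 1D k-means over their horizontal positions reduces them to one trajectory
# point per visible leg.

def kmeans_1d(x, k, iters=20):
    """Cluster scalar responses x into k sorted centroids (one per leg)."""
    cent = np.quantile(x, np.linspace(0, 1, k))   # spread initial guesses
    for _ in range(iters):
        lab = np.argmin(np.abs(x[:, None] - cent[None, :]), axis=1)
        for j in range(k):
            if np.any(lab == j):                  # guard empty clusters
                cent[j] = x[lab == j].mean()
    return np.sort(cent)

# Synthetic filter responses: bursts of detections around 4 leg positions
# (normalised image coordinates).
rng = np.random.default_rng(0)
true_legs = np.array([0.2, 0.35, 0.6, 0.8])
responses = np.concatenate(
    [t + 0.01 * rng.standard_normal(15) for t in true_legs])

legs = kmeans_1d(responses, k=4)   # one point per leg, close to true_legs
```

Repeating this per frame yields one 2D trajectory per leg; the motion model mentioned above then resolves the label ambiguities when leg trajectories cross.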
Skeletons are at the core of 3D character animation. The goal of this work is to design a morphable model of the 3D skeleton of four-footed animals, controlled by a few intuitive parameters. This model enables the automatic generation of an animation skeleton, ready for character rigging, from a few simple measurements performed on the mesh of the quadruped to animate (see fig. ). Quadruped animals - usually mammals - share similar anatomical structures, but only a skilled animator can easily translate them into a simple skeleton convenient for animation. Our approach for constructing the morphable model therefore builds on the statistical learning of reference skeletons designed by an expert animator. This raises the problems of coping with data that includes both translations and rotations, and of avoiding the accumulation of errors due to the skeleton's hierarchical structure. Our solution relies on a quaternion representation for rotations and on expressing the skeleton data in a global frame. We then explore the dimensionality of the space of quadruped skeletons, which yields three intuitive parameters for the morphable model, easily measurable on any 3D mesh of a quadruped. We evaluate our method by comparing the predicted skeletons with user-defined ones on an animal example that was not included in the learning database. We finally demonstrate the usability of the morphable skeleton model for animation. This work has been published at the EG/SIGGRAPH Symposium on Computer Animation, 2005 .
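The statistical-learning step can be sketched with synthetic data (the sizes, the 3-parameter latent structure and the plain PCA below are illustrative assumptions, not the published model):

```python
import numpy as np

# Sketch of the dimensionality-reduction idea above: each reference
# skeleton is flattened into one vector of global-frame data (translations
# and unit-quaternion rotations per joint); PCA over the examples yields a
# low-dimensional morphable model driven by a few parameters.

rng = np.random.default_rng(1)
n_examples, dim = 12, 7 * 4          # e.g. 4 quaternion values x 7 joints
latent = rng.standard_normal((n_examples, 3))     # 3 hidden parameters
basis_true = rng.standard_normal((3, dim))
data = latent @ basis_true + 0.01 * rng.standard_normal((n_examples, dim))

mean = data.mean(axis=0)
U, S, Vt = np.linalg.svd(data - mean, full_matrices=False)
explained = S**2 / np.sum(S**2)      # variance captured per component

# Three components capture almost all the variation, mirroring the three
# intuitive parameters reported for the quadruped model.
n_keep = 3
def morph(params):
    """New flattened skeleton vector from the low-dimensional parameters."""
    return mean + params @ Vt[:n_keep]

skel = morph(np.array([1.0, -0.5, 0.2]))
```

In the real setting, the reconstructed quaternions would additionally be renormalised to unit length; expressing everything in a global frame is what keeps hierarchical error accumulation out of the learned representation.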
The motion of animals is still a challenging problem in 3D animation, both for articulated motion and for the deformation of the skin and fur (see Figure ). The goal of this project is to acquire information from the abundant video footage of wild animals, which are impossible to bring into a standard marker-based motion capture setup. There are several challenges in the use of such video footage for 3D motion capture: only a single 2D view is available, important changes occur in lighting, the contrast between the animal and the background is low, etc. Currently, a method has been developed to first extract a binary silhouette of the animal and then map this silhouette to pre-existing 3D models of animals and motions through statistical prediction. This work was selected as one of the best papers of the Symposium on Computer Animation 2004 (SCA'04) and an extended version has been published in the Graphical Models journal .
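The silhouette-extraction step can be illustrated on synthetic images (a plain background-subtraction toy, not the published pipeline; the threshold and image contents are assumptions):

```python
import numpy as np

# Toy version of the first step described above: extract a binary
# silhouette of the animal from a single 2D view by background subtraction
# and thresholding. The published method then maps such silhouettes to 3D
# models statistically; that part is not reproduced here.

rng = np.random.default_rng(2)
H, W = 60, 80
background = 0.5 + 0.05 * rng.standard_normal((H, W))   # static scene
frame = background.copy()
frame[20:40, 30:55] += 0.4                              # "animal" region

diff = np.abs(frame - background)
silhouette = diff > 0.2                                 # binary mask

area = int(silhouette.sum())   # pixel count of the extracted silhouette
```

Real footage of course lacks a clean reference background, which is exactly why lighting changes and low animal/background contrast make this step hard in practice.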
Techniques for rendering the skin surface have now achieved a highly realistic level. However, human subjects are highly trained at perceiving other faces, and require the 3D animation of a face to be accurate in terms of the amplitude and timing of facial feature motion in order to be believable. We are conducting experiments with experimental psychologists to evaluate the naturalness of the synthetic control of facial feature motion, using an exhaustive search over the tunings of the control parameters. Two rendering techniques are considered: 3D geometric modelling of the face including a texture map (in collaboration with David Sander, from the Experimental Department of the University of Geneva), and a 2D image-based approach using the re-synthesis of videos of real faces (in collaboration with Edouard Gentaz, from the UFR de psychologie expérimentale, Université Pierre Mendès-France, Grenoble).
The above sections presented our research in terms of fundamental tools, models and algorithms. A complementary point of view is to describe it in terms of application domains. The following sections describe our contribution to each of these domains, with reference to the tools we relied on if they were already presented above.
Several of the tools we are developing are devoted to a new generation of interactive modelling systems:
The multiresolution subdivision surfaces presented in section have been used for interactive multiresolution modelling.
The space deformations developed by our external collaborator Alexis Angelidis (see section ), and in particular their extension to constant-volume deformations, are used for the intuitive geometric editing of shapes of constant topological genus.
The real-time physically-based model for virtual clay presented in section is dedicated to a sculpting system as close as possible to interaction with real clay: in the context of Guillaume Dewaele's thesis, co-advised by Radu Horaud from the MOVI group, the virtual clay model is currently being combined with a vision-based interface for capturing the motion of the user's hands. Our clay model will thus be directly sculpted by the fingers, making it usable by any artist, or even as an educational tool for small children.
The diverse fundamental tools we are developing can be combined to allow the large-scale specification, efficient rendering and animation of vegetation (prairies, trees, forests, etc.). The specification of complete natural sceneries is one of the aims of the Dereve II project (see section ).
Some of our work on geometric modelling and physically-based animation has been successfully applied to the medical domain:
Our tools for efficient physically-based simulation, and in particular our new contributions to collision detection and response (see section ), are being used in a new European medical project called Odysseus (see section ).
Furthermore, Mathieu Nesme's PhD research (see section ), which is co-advised by Yohan Payan of the TIMC laboratory, concentrates on the development of improved models of human tissue for surgical simulation.
Several of our new models and algorithms contribute to the animation of virtual creatures. This includes our work on motion capture from video (general body motion, faces, and body deformations, see section ); the procedural method we developed for adding skin details (see section ); the physically-based animation tools (sections and ) that we are currently applying to the simulation of virtual garments; and our adaptive animation algorithm for efficiently computing hair motion (see ).
Except for the extraction of an animal's global motion from video, all of these contributions are developed within projects with industrial partners (see the Virtual Actors RIAM project, section , and the RNTL PARI project, section ).
A first work towards the perceptual evaluation of animation has been achieved in collaboration with the Dept. of Psychology of the University of Geneva, for facial animation. A study was conducted to evaluate which parts of the brain are activated when a picture of an expressive face is shown to a subject, with the gaze pointing towards the subject or not. It was necessary to adapt a 3D model to standard photographs of expressive faces, so that the eye orientation in the photographs could be accurately controlled in a realistic manner.
Based on a contract with L'Oréal, we have been working for more than two years with their research laboratories. The goal is to develop methods able to predict the shape and motion of hair as a function of its physical parameters. The long-term application is the virtual prototyping of hairstyles, taking into account the cosmetic products used.
The goal of this collaboration with the company Galilea was to develop tools for easily including physically-based animation in video games, especially for more realistic humanoids. We transferred technology on physically-based deformation models, fast collision detection, and a novel approach to 3D cloth design. This contract terminated this year; however, Galilea has not been able to exploit the technology due to bankruptcy.
In the context of a RIAM contract started in June 2003 for 18 months with Galilea and LEIBNIZ/INPG, new technological tools are currently being developed for the creation of believable 3D characters for video games. EVASION is involved in two main projects: facial animation for talking characters and real-time animation of hair.
We are still in close contact with the ATI and Nvidia development teams providing suggestions and bug reports, and testing prototype boards.
The licences granted to EVASION by Alias for their 3D modeling and animation software Maya, in the context of a research agreement, are still retained for 2005. Maya is currently used to edit and visualise animations for the animal motion and facial animation projects. Another project explores the use of Maya as a graphical front end for a 3D modeling tool driven by hand gestures. In addition, we are now also evaluating the MotionBuilder software for the real-time processing of 3D character animation, and SketchbookPro as an input for our project on sketch-based modelling.
This is a regional project in the domain of Computer Graphics, started in July 2003 for one year, in which EVASION has been involved on two topics. First, the morphing of textured models: a collaboration between Marie-Paule Cani and Eric Galin (LIRIS). Second, the modelling and rendering of Alpine landscapes: a collaboration between Alexandre Meyer, Eric Tosan (LIRIS) and Xavier Marsault (Ecole d'Architecture de Lyon).
We lead the MIDAS (Modèles Interactifs Déformables pour l'Aide à la Surgétique) project, which also involves the TIMC and ICP laboratories, from 2005 to 2007. The goal of this project is to provide the biomechanics community with physically-based deformable models fast enough for use in per-operative planning, or in the context of trial-and-error parameter tuning where a large number of simulations are performed.
Current techniques for animating the skin surface of a virtual creature from the motion of its skeleton do not take into account complex phenomena such as the rolling of the internal tissue over the bones. In order to tackle this problem, a research project has been initiated between EVASION and the National Museum of Natural History in Paris. For the study of locomotion, the Museum has access to X-ray video of live animals, making it possible to visualize the motion of the internal skeleton during locomotion. Using standard stereovision techniques, the 3D surface of the animal will be extracted and correlated, in an innovative machine learning framework, with the internal X-ray data of the skeleton. As a result of this learning phase, more realistic 3D motion of the skin will be achieved, controlled by standard 3D skeleton motion. This project is sponsored by an ANR grant for the next three years. It gathers 4 participants: the EVASION project, the National Museum of Natural History, the Université de Rennes and the Université Paris 5.
This project aims at developing new techniques and hybrid models to model, visualize, animate and transmit natural scenes. It involves EVASION, IRIT and LABRI.
This project (from October 2003 to October 2005) is an initiative of C. Pelachaud (U. Paris VIII), S. Donikian (SIAMES project, IRISA) and J.P. Jessel (IRIT) to gather the state of the art and perspectives on the modelling and animation of 3D characters within the French research community. In particular, the members of EVASION are participating in two groups: Modelling of the human body, led by S. Akkouche (LIRIS), and Conversational Agents, led by C. Pelachaud (U. Paris VIII).
The partners for this project are the E-MOTION group at GRAVIR and the SIAMES group at IRISA (Rennes). The goal of this project is to model the behavior of autonomous characters in complex environments. Our contribution is to model a vegetation scene and to simulate uneven ground with local collapses under the feet of the characters. The results were presented at the Journées Robea held in March.
The mission of AIM@SHAPE is to advance research in the direction of semantic-based shape representations and semantic-oriented tools to acquire, build, transmit, and process shapes with their associated knowledge. We foresee a new generation of shapes in which knowledge is explicitly represented and, therefore, can be retrieved, processed, shared, and exploited to construct new knowledge. This Network of Excellence started in December 2003. This year Georges-Pierre Bonneau and Marie-Paule Cani have actively collaborated in the publication of STAR reports on the topics covered by the Network. Marie-Paule Cani co-organized with Marc Alexa a summer school on Interactive Shape Modeling, held in Darmstadt in July 2005: http://www.interactiveshapemodeling.net/.
Odysseus is a European EUREKA project on the simulation of laparoscopic surgery, running from 2004 to 2007. Driven by IRCAD, it involves two industrial partners (Karl Storz, SimSurgery) and three research projects of INRIA: EVASION, EPIDAURE, ALCOVE. The overall project is to develop commercial products for collaborative diagnosis and patient-specific planning. Our participation is related to the planning and real-time simulation of surgery using patient-specific data.
Marie-Paule Cani contributed to the creation of a French chapter for the Eurographics association in July 2003. She has been the president of this chapter since then. The third event of the French chapter is the conference "Rencontres Francophones d'Informatique Graphique" held in Strasbourg in November 2005, jointly with AFIG (Association Française d'Informatique Graphique).
Program Committees:
Marie-Paule Cani served as ``Conference chair'' for IEEE Shape Modelling International 2005, held at MIT in Boston in June 2005. She was a program committee member for SIGGRAPH 2005, for the ``ACM Symposium on Computer Animation'' (SCA'05, Los Angeles, USA), for the EUROGRAPHICS workshops on Natural Phenomena and on Sketch-based Interfaces (Dublin, Ireland, Sept. 2005), and for the CAD/Graphics conference (Hong Kong, December 2005). She is paper co-chair of the upcoming SCA'06 conference.
Georges-Pierre Bonneau was a program committee member for SMI'05 (Massachusetts, USA, 13-17 June 2005) and IEEE Visualization'05 (Minneapolis, USA, Oct. 23-28 2005). He will be a program committee member for Shape Modeling International 2006 (Sendai, Japan, 14-16th June 2006), the Eurographics Symposium on Visualization 2006 (Lisbon, Portugal, May 8-10, 2006) and the Eurographics Workshop on Parallel Graphics and Visualization (Braga, Portugal, May 11-12th, 2006).
François Faure served on the international program committee for ``Computer Animation and Social Agents'' (CASA'05, Hong Kong) and ``Workshop in Virtual Reality Interactions and Physical Simulations'' (VRIPHYS'05, Colima, Mexico).
Fabrice Neyret was a program committee member for the "ACM Symposium on Computer Animation" (SCA'05, Los Angeles) and for the "Eurographics Workshop on Natural Phenomena'05" (EWNP'05, Dublin).
Editorial boards:
Marie-Paule Cani serves on the editorial board of the journal Graphical Models (Academic Press); Georges-Pierre Bonneau serves on the editorial board of the IEEE TVCG journal.
The EVASION team members have also given several invited talks in addition to their involvement in the many aforementioned conferences and workshops during the year. Marie-Paule Cani gave an invited talk at the University of California, Berkeley, in August 2005.
Lionel Reveret was invited to present his work on the 3D animation of animals at a symposium dedicated to Etienne-Jules Marey at the Collège de France in Paris.
Marie-Paule Cani and Fabrice Neyret are co-writers of a chapter on natural scenes in the ``Traité de la réalité virtuelle'', to be published in 2006 ;
François Faure is co-writer of a chapter on physically-based animation in the ``Traité de la réalité virtuelle'', to be published in 2006 ;
Sylvain Lefebvre and Fabrice Neyret are co-writers of a chapter in ``GPU Gems-2'' .
The team actively participated in the national science day "Science en Fête", with a dedicated virtual reality demo (a monitorat project shared by several teams) and a video presentation giving an overview of our work to a broad audience.
Marie-Paule Cani gave an invited talk at Intersculpt, the biennial conference on digital sculpting (Nancy, November 2005). This conference, which gathers scientists with artists, designers and people from the 3D printing industry, is open to the public.
The 'MobiNet' team (Joelle Thollot at ARTIS, Fabrice Neyret and Franck Hétroy at EVASION, plus a dozen temporary assistants) organizes 8 half-day practical sessions per year for about 150 senior high school students, as part of the INPG "engineer weeks". The purpose is to give a more intuitive practice of maths and physics, and to give insights into programming and engineering. See http://www-evasion.imag.fr/mobinet/index.en.html.
In addition to the "engineer weeks", every year a group of "monitor" PhD students conducts an experiment based on MobiNet with a high school class, within its regular courses. This year, 3 students advised by Fabrice Neyret prepared class exercises on the topic of vectors for a math class, in collaboration with the teacher (reports and booklets are available on the MobiNet web site: http://www-evasion.imag.fr/mobinet/index.en.html).
Moreover, Fabrice Neyret presents the tool and the experiment in various workshops and institutes ("journées Greco", the conference "le Goût des Sciences", IUFM, ...). He also maintains a web site repository.
Fabrice Neyret takes part in various public outreach operations:
He co-organizes several operations:
He is on the leading staff of the "Observatoire Zététique" ( http://www.observatoire-zetetique.org/), an organisation which aims at promoting a critical and scientific approach, especially towards paranormal or pseudo-scientific theses, by organising test protocols, inquiries, publications and broadcasts, public conferences, and training sessions.
He is on the leading staff of the ``Cafés Sciences et Citoyens'' animation team (Grenoble), funded by the communication department of the CNRS, and he maintains the web site http://www-evasion.imag.fr/cafesSC/. Conferences are organised on a monthly basis.
He advises a monitor group for an exhibition-demo "Interacting in a Virtual Environment" prepared for the Fête de la Science.
He maintains web sites for various operations, plus others related to research topics.
He participates as an expert in various operations:
expert in a public debate "Does TV tell the truth?", as a specialist of images, at St Martin d'Hères (38) in March 2005;
expert in a public debate on "digital images" at Roman (38) in March 2005;
public presentation "Manipulations by images" at Grenoble in May 2005;
expert in a trans-disciplinary debate about "trees" in Meylan, 2005.
He participates in broadcasts and journalist interviews:
a daily 2-minute broadcast on France Bleu Isère about "3D", twice a day during one week, in October 2005;
in the Dauphiné Libéré (local newspaper) and on France Bleu Isère (radio) after the debate on "digital images" in March 2005.
In addition to the regular teaching activities (UJF, INPG) of the faculty members, several EVASION researchers taught courses in the "Image, Vision, Robotics" research Master, the "Mathematical Engineering" Master and the 3rd-year "Image and Virtual Reality" track at ENSIMAG. François Faure gave a 22-hour course on animation over two weeks at the Vienna University (Austria).