The EVASION project-team, LJK-IMAG laboratory (UMR 5224), is a joint project between CNRS, INRIA, Institut National Polytechnique de Grenoble (INPG), Université Joseph Fourier (UJF) and Université Pierre-Mendès-France (UPMF).
The EVASION project addresses the modeling, animation, visualization and rendering of natural scenes and phenomena. In addition to the high impact of this research on audiovisual applications (3D feature films, special effects, video games), we also address the rising demand for efficient visual simulations in areas such as the environment and medicine. We thus study objects from the animal, mineral and vegetal realms, all of which may be integrated into a complex natural scene. We constantly seek a balance between efficiency and visual realism. This balance depends on the application (e.g., the design of immersive simulators requires real-time performance, while the synthesis of high-quality images may be the primary goal in other applications).
Since its creation, EVASION has mostly tackled the modeling, animation, visualization and rendering of isolated natural objects or phenomena. A very challenging long-term goal, which may not be reachable within the next few years but towards which we should strive, is to simulate full, complex natural scenes combining many elements of a different nature. This would enable us to test our algorithms on real-size data and to achieve new applications such as the interactive exploration of a complex, heterogeneous data set from simulation, or of a visually credible natural scene. Being able to animate such a scene during exploration, and to interact with the simulation taking place, would be very interesting. The three objectives below set several milestones towards this long-term goal.
Natural scenes present a multitude of similar details, which are never identical and obey specific physical and spatial distribution laws. Modeling these scenes is thus particularly difficult: it would take a designer years, and it is not easy to automate with a computer either. Moreover, interfaces enabling intuitive and fast user control should be provided. Lastly, explicitly storing the information for every detail in a landscape is obviously not possible: procedural models for generating data on the fly, controlled by mid- or high-level parameters, thus have to be developed. Our first objective for the next few years is therefore to develop novel methods for specifying a natural scene. This includes modeling the geometry of individual elements, modeling their local appearance, positioning them within a full scene and controlling motion parameters. More precisely, we will investigate:
New representations and deformation techniques for intuitive shape modeling.
The exploitation of sketching, annotation and the analysis of real data from video, 3D scanners and other devices for the synthesis and animation of natural scenes.
The procedural synthesis of geometry, motion and local appearance (texture, shaders) using existing knowledge, user input and/or statistical data.
Most natural scenes are in motion. However, many of the animated phenomena that we can observe in nature have never been realistically, yet efficiently, simulated in Computer Graphics. Our approach to tackling this problem is to increase and deepen our collaborations with scientists from other disciplines. From our past experience, we believe that such interdisciplinary collaborations are very beneficial for both parties: they provide us with a better understanding of the phenomena to model, and help us obtain input data and experiment with the most recent models. In return, our partners get interactive virtual prototypes that help them test different hypotheses and enable a visual appreciation of their results. In particular, our aims are to:
Improve interactive animation techniques for all kinds of physical models.
Develop models for new individual phenomena.
Work on the interaction between phenomena of different nature, such as forest and wind, sand and water, erosion (wind, water and landscape) or even eco-systems (soil, water, plants and animals).
Being able to handle massive data sets has been a strategic objective for French Computer Science for the last few years. In our research field, this leads us to investigate both the scientific visualization of very large data sets (which helps in exploring and understanding the data provided, for instance, by our scientific partners from other research fields) and the real-time, realistic rendering of large natural scenes, with the interactive exploration of, and possible immersion in, such scenes as a long-term goal. More precisely, our objectives are to develop:
Novel methods for the interactive visualization of complex, hybrid massive data sets, possibly embedding 1D and 2D structures within volumetric data which may represent scalar, vector or tensor fields.
Perception-based criteria for switching between levels of detail, or between the representations of different natures that we use in multi-models.
Real-time techniques enabling us to achieve the rendering of full natural scenes, by exploring new, non-polygonal representations and relying on programmable graphics hardware wherever possible.
Selected highlights:
On December 6th 2007, Marie-Paule Cani received from Valérie Pécresse, the French minister of Higher Education and Research, the Irène Joliot-Curie award in the “mentor” category. The latter is awarded each year to a person or organization whose action has been an essential help to one or several young female researchers at the beginning of their scientific career.
The Grimage immersive platform, developed by the PERCEPTION, MOAIS and EVASION project-teams at Grenoble - Rhône-Alpes, was selected for a demonstration at the SIGGRAPH 2007 Emerging Technologies exhibition, August 5-9, 2007. Grimage combines multi-camera 3D modeling, physical simulation and parallel execution into a new immersive experience, as illustrated in figure . The goal of Grimage is to associate computer vision, physical simulation and parallelism to move one step towards the next generation of virtual applications. 3D geometry and photometric data are extracted from a set of calibrated cameras without requiring intrusive markers. Parallel computation makes interactive execution possible. This demo was definitely one of the most popular of the show.
Florence Bertails received the SPECIF PhD Award in January 2007, during the SPECIF annual meeting in Bordeaux. The aim of the SPECIF Award is to honour each year an excellent PhD thesis in computer science, and to promote young researchers in the field. Florence's PhD thesis, entitled "Simulation of virtual hair", was prepared in the EVASION project-team under the supervision of Marie-Paule Cani (INPG) and Basile Audoly (CNRS, LMM-Univ. Pierre et Marie Curie Paris VI), and was defended in June 2006 at Institut National Polytechnique de Grenoble. Her work was published at SIGGRAPH 2006 and presented in a SIGGRAPH course on hair simulation in 2007.
The synthesis of natural scenes has been studied long after that of man-made environments in Computer Graphics, due to the difficulty of handling the high complexity of natural objects and phenomena. This complexity can express itself in the number of elements (e.g., a prairie, hair), in the complexity of the shapes (e.g., some vegetal or animal organisms) and of their deformations (a cloud of smoke), in the motions (e.g., a running animal, a stream), or in the local appearance of the objects (a lava flow). To tackle this challenge:
we exploit a priori knowledge from other sciences as much as possible, in addition to inputs from the real world such as images and videos;
we take a transversal approach with respect to the classical decomposition of Computer Graphics into Modeling, Rendering and Animation: we instead study the modeling, animation and visualization of a phenomenon in a combined manner;
we reduce computation time by developing alternative representations to traditional geometric models and finite element simulations: hierarchies of simple coupled models instead of a single complex model; multi-resolution models and algorithms; adaptive levels of detail;
we take care to keep the user in the loop (by developing interactive techniques wherever possible) and to provide him/her with intuitive control;
we validate our results through comparison with the real phenomena, based on perceptual criteria.
Our research strategy is twofold:
Development of fundamental tools, i.e., of new models and algorithms satisfying the conditions above. Indeed, we believe that there are enough similarities between natural objects to factorize our efforts by the design of these generic tools. For instance, whatever their nature, natural objects are subject to physical laws that constrain their motion and deformation, and sometimes their shape (which results from the combined actions of growth and aging processes). This leads us to conduct research in adapted geometric representations, physically-based animation, collision detection and phenomenological algorithms to simulate growth or aging. Secondly, the high number of details, sometimes similar at different resolutions, which can be found in natural objects, leads us to the design of specific adaptive or multi-resolution models and algorithms. Lastly, being able to efficiently display very complex models and data-sets is required in most of our applications, which leads us to contribute to the visualization domain.
Validation of these models by their application to specific natural scenes. We cover scenes from the animal realm (animals in motion and parts of the human body, from internal organs dedicated to medical applications to skin, faces and hair needed for character animation), the vegetal realm (complex vegetal shapes, specific material such as tree barks, animated prairies, meadows and forests) and the mineral realm (mud-flows, avalanches, streams, smoke, cloud).
The fundamental tools we develop and their applications to specific natural scenes are opportunities to enhance our work through collaborations with both industrial partners and scientists from other disciplines (the current collaborations are listed in Sections 7 and 8). This section briefly reviews our main application domains.
The main industrial applications of the new representation, animation and rendering techniques we develop, in addition to many of the specific models we propose for natural objects, are in the audiovisual domain: a large part of our work is used in joint projects with the special effects industry and/or with video games companies.
Some of the geometric representations we develop, and their efficient physically-based animations, are particularly useful in medical applications involving the modeling and simulation of virtual organs and their use in either surgery planning or interactive pedagogical surgery simulators. All of our applications in this area are developed jointly with medical partners, which is essential both for the specification of the needs and for the validation of results.
Some of our work on the design and rendering of large natural scenes (mud flows, rock flows, glaciers, avalanches, streams, forests, all simulated on controllable terrain data) has led us to very interesting collaborations with scientists from other disciplines. These disciplines range from biology and the environment to geology and mechanics. In particular, we are involved in interdisciplinary collaborations in the domains of impact studies and the simulation of natural risks, where visual communication using realistic rendering is essential for enhancing simulation results.
Some of the new geometrical representations and deformation techniques we develop lead us to design novel interactive modeling systems. This includes for instance applications of implicit surfaces, multiresolution subdivision surfaces, space deformations and physically-based clay models. Some of this work is exploited in contacts and collaborations with the industrial design industry.
Lastly, the new tools we develop in the visualisation domain (multiresolution representations, efficient display for huge data-sets) are exploited in several industrial collaborations involving the energy and drug industries. These applications are dedicated either to the visualisation of simulation results or to the visualisation of huge geometric datasets (an entire power plant, for instance).
Although software development is not among our main objectives, the various projects we carry out lead us to regular development activities, either within specific projects or through the development of general-purpose libraries.
SOFA is a C++ library primarily targeted at medical simulation research. Based on an advanced software architecture, it makes it possible to (1) create complex and evolving simulations by combining new algorithms with algorithms already included in SOFA; (2) modify most parameters of the simulation – deformable behavior, surface representation, solver, constraints, collision algorithm, etc. – by simply editing an XML file; (3) build complex models from simpler ones using a scene-graph description; (4) efficiently simulate the dynamics of interacting objects using abstract equation solvers; and (5) reuse and easily compare a variety of available methods (see figure ).
SOFA is developed in collaboration with two other INRIA project-teams, Alcove and Asclepios, as well as with the SIM group at the Massachusetts General Hospital (MGH). The first official release occurred in February 2007. It was presented to the medical community at the Medicine Meets Virtual Reality conference (MMVR'07) , as well as at the SURGETICA 2007 conference .
SOFA is currently evolving toward an INRIA AEN program (Action d'Envergure Nationale).
The MobiNet software allows for the creation of simple applications such as video games, virtual physics experiments or pedagogical math illustrations. It relies on an intuitive graphical interface and language which allows the user to program a set of mobile objects (possibly through a network). It is available in the public domain for Linux and Windows at
http://
Besides "engineer weeks", every year a group of "monitor" PhD students conducts an experiment based on MobiNet with a high school class as part of their teaching duties (see Section ). Moreover, presentations are given at workshops and institutes, and a web site repository is maintained.
Proland (for procedural landscape) is the software platform developed for the NatSim project (see ). The goal of this platform is the real-time rendering and editing of large landscapes. It currently integrates several results of the EVASION project-team on top of a terrain rendering algorithm: grass, individual animated trees, and volumetric forests (and the goal is to integrate future results, such as clouds or animated rivers). The terrain data can come from digital sources (elevation data, satellite photographs) and from vector data (spline curves representing linear features such as roads and rivers and areal features such as fields and forests). The vector data can be edited in real-time and can be used to constrain the vegetation (trees along roads or inside forests, but not in rivers).
MaTISSe is a software package dedicated to free-form shape Modeling and Animation through Interactive Sketching and Sculpting gestures. The goal is to provide a very intuitive way to create digital shapes, as easy to use for the general public as roughly sketching a shape or modeling it with a piece of clay. Our first prototype was developed in 2007, in the framework of a research contract with the company Axiatec . It makes it possible to progressively blend 3D shapes created - possibly at different scales and from different viewing angles - by painting their 2D projection. This prototype, written in C++ as an extension of the Ogre open-source library, heavily relies on our previous work on sketch-based modeling using skeletons and convolution surfaces . Future extensions should include the combination of sketching with gestures related to clay sculpting, such as deforming a shape through pulling or pushing gestures.
This work is done in collaboration with Stefanie Hahmann from LJK. A collaboration is also taking place on this topic with Prof. Gershon Elber from Technion, in the framework of the Aim@Shape Network of Excellence (see Section ). The purpose of this research is to allow complex nonlinear geometric constraints in a multiresolution geometric modeling environment. This year, constraints of constant volume for the multiresolution deformation of BSpline tensor-product surfaces as well as subdivision surfaces have been investigated. Fig. illustrates the deformation of a subdivision surface with constant volume. The work on multiresolution subdivision surfaces has been published at Eurographics in . The work on BSpline tensor-product surfaces has been accepted for publication in the journal CAGD, and will appear next year. A survey on integrating constraints into multiresolution models has been written in collaboration with Prof. Elber, and published in . The work on multiresolution curves has also been applied to the problem of morphing - or deforming - one curve into another. It has been published in .
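As an illustration of the kind of quantity such constant-volume constraints must track, the volume enclosed by a closed surface can be computed with the divergence theorem. The sketch below evaluates it for a closed triangle mesh; it only illustrates the volume evaluation, not the constraint-solving method itself, and the function name is hypothetical:

```python
def mesh_volume(vertices, triangles):
    """Signed volume of a closed, outward-oriented triangle mesh via the
    divergence theorem: each face (a, b, c) contributes the signed volume
    of the tetrahedron (origin, a, b, c), i.e. dot(a, cross(b, c)) / 6."""
    total = 0.0
    for i, j, k in triangles:
        ax, ay, az = vertices[i]
        bx, by, bz = vertices[j]
        cx, cy, cz = vertices[k]
        # scalar triple product a . (b x c)
        total += (ax * (by * cz - bz * cy)
                  + ay * (bz * cx - bx * cz)
                  + az * (bx * cy - by * cx))
    return total / 6.0
```

Enforcing the constraint then amounts to keeping this scalar fixed while the control points move.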
This year, our work on interactive sculpting techniques focused on developing ways to interact with virtual clay.
The layered volumetric model we previously developed for virtual clay achieves the desired plausibility in real time, but raises the problem of providing intuitive interaction tools. The Master's thesis of Adeline Pihuit, co-advised by Paul Kry and Marie-Paule Cani in 2007, addressed this problem. She combined the virtual clay model with a compliant virtual hand, used as a deformable tool for sculpting it. She experimented with several devices for controlling the motion and deformation of the virtual hand. In particular, she developed a prototype where a soft ball serving as an avatar for the virtual clay is attached to a force feedback device (a Phantom). The ball is augmented with force sensors, to ease the control of the deformable virtual hand that sculpts the clay.
The first results were presented at the French conference on virtual reality . User studies will be held soon to validate our contributions, thanks to funds from the PPF "Multimodal interaction" (see Section ).
Sketch-based techniques are currently attracting more and more attention, being recognized as a fast and intuitive way of creating digital content. We are exploring these techniques from two different view-points:
A first class of sketching techniques directly infers free-form 3D shapes from arbitrary progressive sketches, without any a priori knowledge of the objects being represented. In collaboration with Loïc Barthe from the IRIT lab in Toulouse, we are studying the use of convolution surfaces for achieving this goal: the user paints a 2D projection of the shape. A skeleton (or medial axis), taking the form of a set of branching curves, is reconstructed from this 2D region. It is converted into a closed-form convolution surface whose radius varies along the skeleton. The resulting 3D shape can be extended by sketching over it from a different viewpoint, while the blending operator used adapts its action so that no detail is blurred during the process. This work was supported by a direct industrial contract with the firm Axiatec (see Section ), and led to the development of a first prototype of the MaTISSe software. This project will continue through the PhD of Adeline Pihuit, started in October 2007, under the supervision of Olivier Palombi and Marie-Paule Cani: the aim will be to explore the use of sketch-based modeling in the context of the teaching of anatomy, where a professor progressively sketches a simplified view of one or several organs and explains their action, which could lead to some animation.
Other sketching techniques are able to create a complex shape from a single sketch, using some a priori knowledge on the object being drawn for inferring the missing 3D information. This is the topic of Jamie Wither's PhD, advised by Marie-Paule Cani. In collaboration with Alla Sheffer from UBC, Canada, we studied the reconstruction of developable surfaces from sketches and applied it to the sketching of objects made of metal, leather or cloth . We are currently studying the sketching of fractal-like objects, such as trees (in collaboration with Frédéric Boudon from CIRAD, Montpellier) or clouds, whose shape can be sketched and refined at different scales. This last part of the work is one of our contributions to the ANR project NatSim (see Section ).
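To make the first approach above concrete: a convolution surface is an iso-contour of a scalar field obtained by integrating a kernel along the skeleton. The sketch below is a deliberately simplified illustration (point-sampled skeleton, fixed-radius Cauchy kernel), not the closed-form, varying-radius formulation of our work; all names are hypothetical:

```python
def convolution_field(p, skeleton, radius=0.3):
    """Approximate convolution-surface scalar field at point p: integrate
    a Cauchy kernel along a polyline skeleton by uniform point sampling.
    The modeled surface is an iso-contour of this field, e.g. field == 1."""
    x, y, z = p
    total = 0.0
    for sx, sy, sz in skeleton:
        d2 = (x - sx) ** 2 + (y - sy) ** 2 + (z - sz) ** 2
        total += 1.0 / (1.0 + d2 / radius ** 2) ** 2  # Cauchy kernel
    return total
```

Because contributions from all skeleton samples sum, nearby branches blend smoothly, which is what the adaptive blending operator mentioned above must then keep under control.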
Animation of a 3D model usually relies on a hierarchical representation of its articulations called the animation skeleton. Creating this animation skeleton is a laborious task since it is usually done by hand. During his Master's thesis, Grégoire Aujay developed an algorithm which uses geometric information on the model to automatically compute a hierarchical geometric skeleton, which can then be converted into an animation skeleton. This method was then improved to create skeletons which match animation skeletons in both the biped and quadruped cases . User intervention is restricted to the selection of one or a few points at the very beginning of the process, but the algorithm is customizable and users can adjust the skeleton to their needs. This work has been done in collaboration with Francis Lazarus, from the GIPSA-Lab in Grenoble.
Skinning, which consists in computing how vertices of a character mesh (representing its skin) are moved during a deformation w.r.t. the skeleton bones, is currently the most tedious part in the skeleton-based character animation process. We propose new geometrical tools to enhance current methods. First, we develop a new skinning framework inspired from the mathematical concept of atlas of charts: we segment a 3D model of a character into overlapping parts, each of them being anatomically meaningful (e.g., a region for each arm, leg, etc., with overlaps around joints), then during deformation the position of each vertex in an overlapping area is updated thanks to the movement of neighboring bones. This work has been done in collaboration with Boris Thibert from the MGMI team of the LJK, Cédric Gérot and Annick Montanvert from the GIPSA-Lab in Grenoble, and Lin Lu from the University of Hong Kong.
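For context, the classical per-vertex blending that such chart-based approaches refine can be sketched as linear blend skinning: each skin vertex is a weighted combination of its images under the rigid transforms of the influencing bones, with several nonzero weights only in the overlap regions around joints. A minimal, hypothetical sketch:

```python
def skin_vertex(v, bones, weights):
    """Linear blend skinning of one vertex v = (x, y, z).
    bones:   list of (R, t), R a 3x3 rotation (list of rows), t a translation
    weights: per-bone weights summing to 1; away from joints a vertex is
             fully owned by one bone, in overlap regions several weights
             are nonzero and the transformed positions are blended."""
    out = [0.0, 0.0, 0.0]
    for (R, t), w in zip(bones, weights):
        for i in range(3):
            out[i] += w * (R[i][0] * v[0] + R[i][1] * v[1]
                           + R[i][2] * v[2] + t[i])
    return tuple(out)
```

The chart-based method described above replaces the hand-painted weights by weights derived from the anatomically meaningful overlapping segmentation.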
Aneurysms are excrescences on blood vessels. They can rupture, letting blood propagate outside the vessel, which often leads to death. In some cases, the blood clots sufficiently fast that people survive. However, a neurosurgeon or a neuroradiologist must then intervene very quickly in order to repair the vessel before the aneurysm ruptures once more.
The purpose of this research is to help neurosurgeons and neuroradiologists plan surgery, by giving them quantitative information about the size, shape and position of aneurysms. The first part of this work was done during the internship of Sahar Hassan: we developed a simple algorithm for the automatic detection of aneurysms in CTA images.
This work continued in 2007 with the Master's thesis of Sahar Hassan: the detection and measurement of aneurysms is now implemented in an application that will be evaluated by a radiologist at the Grenoble University Hospital. We plan to publish the method in the medical literature.
We address the question of simulating highly deformable objects. This year, we have focused on collision detection and response between arbitrary deformable or rigid bodies, using density fields, as illustrated in figure . A new method was implemented in SOFA and will be submitted to a leading international conference.
We are continuing a collaboration on surgical simulation with the TIMC laboratory through a co-advised PhD thesis. Its purpose is to develop new finite element models for the interactive, physically-based animation of human tissue.
Based on the hexahedral finite elements we developed last year, we are creating a new multigrid method to speed up the computations. An example of multigrid model is shown in figure .
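To illustrate the multigrid idea on a problem far simpler than our hexahedral finite elements, here is a sketch of a V-cycle for the 1D discrete Laplacian A = tridiag(-1, 2, -1): smooth the high-frequency error on the fine grid, correct the smooth error on a coarser grid, recursively. Everything here is illustrative and unrelated to the simulator's actual code:

```python
def jacobi(u, f, sweeps=3, omega=2.0 / 3.0):
    """Weighted Jacobi smoothing for A u = f, A = tridiag(-1, 2, -1)."""
    n = len(u)
    for _ in range(sweeps):
        new = u[:]
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            new[i] = (1 - omega) * u[i] + omega * (f[i] + left + right) / 2.0
        u = new
    return u

def residual(u, f):
    """r = f - A u for the same tridiagonal operator."""
    n = len(u)
    return [f[i] - (2 * u[i]
                    - (u[i - 1] if i > 0 else 0.0)
                    - (u[i + 1] if i < n - 1 else 0.0)) for i in range(n)]

def vcycle(u, f):
    """One V-cycle: pre-smooth, restrict the residual, solve the error
    equation recursively on the coarse grid, prolong, post-smooth."""
    n = len(u)
    if n <= 3:                        # coarsest level: smooth to convergence
        return jacobi(u, f, sweeps=100)
    u = jacobi(u, f)                  # pre-smoothing
    r = residual(u, f)
    m = (n - 1) // 2                  # coarse points sit at fine indices 2i+1
    # full-weighting restriction of the residual
    rc = [(r[2 * i] + 2 * r[2 * i + 1] + r[2 * i + 2]) / 4.0 for i in range(m)]
    # Galerkin coarse operator is tridiag(-1,2,-1)/4, so scale the rhs by 4
    ec = vcycle([0.0] * m, [4.0 * x for x in rc])
    e = [0.0] * n                     # prolong by linear interpolation
    for i in range(m):
        e[2 * i] += 0.5 * ec[i]
        e[2 * i + 1] += ec[i]
        e[2 * i + 2] += 0.5 * ec[i]
    u = [ui + ei for ui, ei in zip(u, e)]
    return jacobi(u, f)               # post-smoothing
```

The payoff is that each cycle reduces the error by a grid-independent factor, which is what makes the approach attractive for speeding up large hexahedral systems.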
In the near future, we plan to apply this new approach to the simulation of patient-specific organ models.
Realistically predicting the shape and motion of human hair requires an accurate mechanical model for strands, a convincing extension to wisp dynamics and a good model for their interactions within hair. Following our collaboration with L'Oréal research labs and with Basile Audoly from CNRS, Paris, which led to the Super-Helices model for hair, we presented a synthesis of the different techniques we developed for hair modeling, animation and rendering within a course at SIGGRAPH 2007 . We also developed, in collaboration with Florence Bertails (postdoc at UBC, Canada), an interface for creating a physically-based hair-style from a single sketch: a few strokes define some sample hair strands and tune the hair volume, and this information is used to infer the parameters of the static version of Super-Helices (Figure ). This work, which also makes it possible to create hair-styles by annotating a real photograph, was presented at the IEEE Shape Modeling and Applications conference .
Qizhi Yu is working on this topic as a Marie Curie PhD student (Visitor program), supervised by Fabrice Neyret. The purpose is to obtain a realistic, detailed appearance of landscape-long animated rivers in real time, with user-editable features. The idea is to separate the river simulation into three scales, corresponding to different specification and simulation tools: a macroscopic scale for the topographic shape and global flow characteristics (relying on simple CFD at coarse resolution), a mesoscopic scale for the local wave patterns (relying on dedicated phenomenological models), and a microscopic scale for the details (relying on procedural texture schemes). Note that this topic falls within the scope of the NatSim collaboration (see Section ).
The PhD of Mathieu Coquerelle, co-advised by Marie-Paule Cani and Georges-Henri Cottet, explores the use of vortex particles for animating liquids and gases and for simulating their interactions with rigid solids.
Natural mountains and valleys are gradually eroded by rainfall and river flows. Physically-based modeling of this complex phenomenon is a major concern in producing realistic synthesized terrains. The objective of this project is to develop proper models for simulating and visualizing these specific natural phenomena. We take advantage of the high parallel computing ability and new features of the latest graphics hardware to accelerate the simulation and visualization process. We proposed a new erosion simulation method based on an efficient shallow water model carefully designed to run entirely on GPU .
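Our method is a shallow-water simulation running on the GPU; as a much simpler CPU-side illustration of the mass-conserving transport at the heart of erosion simulation, here is a classical thermal-erosion step on a 1D heightfield (a standard textbook technique, not our shallow-water model; names and parameters are hypothetical):

```python
def thermal_erosion_step(h, talus=0.1, k=0.5):
    """One thermal-erosion step on a 1D heightfield h: wherever the slope
    between two neighboring cells exceeds the talus threshold, move a
    fraction k of half the excess downhill. Total material is conserved
    because every unit removed from one cell is added to its neighbor."""
    delta = [0.0] * len(h)
    for i in range(len(h) - 1):
        d = h[i] - h[i + 1]
        if d > talus:                      # left cell is too steep: slide right
            move = k * (d - talus) / 2.0
            delta[i] -= move
            delta[i + 1] += move
        elif -d > talus:                   # right cell is too steep: slide left
            move = k * (-d - talus) / 2.0
            delta[i + 1] -= move
            delta[i] += move
    return [a + b for a, b in zip(h, delta)]
```

Iterating this step relaxes slopes towards the talus angle; the real hydraulic version additionally advects sediment with the water flow, and maps naturally onto the GPU because each cell only reads its neighbors.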
The motion of animals is still a challenging problem in 3D animation, both for articulated motion and deformation of the skin and fur.
In several domains of character animation, footsteps are among the most important constraints: they guarantee one of the main aspects of realistic locomotion. Placing them manually is even more complex for quadrupeds. Being able to automatically extract footstep information from video footage is thus an important contribution. The method we developed is based on a dedicated image filter designed to detect the pattern of animal legs. Over the time range of the video, the positive filter responses are clustered so that a single trajectory point is obtained per leg. Since 2D profile views are considered, ambiguities arise in the prediction of individual foot positions when legs cross each other in the image (typically the left and right sides of the animal, and the front and back legs at higher velocities). A motion model has been developed to handle this problem. This work has been done in collaboration with the University of Washington, in Seattle, USA.
As of now, this work on footsteps is still in progress and has not been published yet. A collaboration with the National Museum of Natural History has been started to develop these video techniques in the framework of a new theoretical perspective recently investigated by the Museum: a parametric description of animal gaits. This work uses a massive database of videos of military dogs provided by the Museum.
In addition to footstep analysis, most of this year's activity on motion from video has been dedicated to the ANR project Kameleon (Fig. ). The experimental set-up for this project has been finalized and has been fully operational since June 2007. It uses 4 high-speed cameras synchronized with x-ray video, and these five views are now correctly calibrated. Several projects can now be started, especially the learning of motion patterns for head and limbs in rat locomotion. In parallel, a new post-doc has been appointed; one of the first achievements has been a real-time solution for the 3D reconstruction of the rat skeleton from static x-ray images using the latest GPU techniques. This 3D skeleton has been used to develop a physical model of rat legs that can be animated from x-ray videos. Results can be seen on the project website.
http://
This work investigates how the complex motion of plants and trees under wind can be analyzed from video and retargeted onto complex 3D models to create realistic animations. A first method was proposed and published at the SCA conference in 2006, in collaboration with the University of Toronto. This work is now continued in cooperation with the INRA laboratories dedicated to the physiology of trees and the fluid mechanics laboratory of Ecole polytechnique, within the ANR project Chêne et Roseaux started in 2007. One achievement of this collaboration has been the development of visualization software for the mechanical modes computed for tree structures.
In addition, Julien Diener is currently spending a 6-month stay at the DGP laboratory of the University of Toronto to perform a user study on a motion sketching tool adapted to the measurement of tree motion from video.
This is a new project motivated by previous work of Maxime Tournier in the context of a collaboration between EVASION and Ubisoft. This collaboration was not continued in 2007, but a new project has started with other video game companies (see the GENAC project, Section ). The goal of this work is to develop new mathematical foundations for multivariate analysis on unit quaternions. Unit quaternions are the canonical representation of the 3D rotations used in articulated body descriptions, and multilinear analysis on such parameters is challenging due to the intrinsic non-linearity of unit quaternion interpolation. Currently, a formulation of PCA as Principal Geodesic Analysis applied on the hypersphere of unit quaternions is being investigated. A collaboration on this subject has also been started within the ARC Fantastik (see Section ).
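The key ingredient of Principal Geodesic Analysis is the pair of log/exp maps that carry unit quaternions to and from the tangent space of the hypersphere, where ordinary linear PCA can then be applied. A minimal sketch of these maps at the identity (illustrative only, not the project's formulation):

```python
import math

def quat_log(q):
    """Log map of a unit quaternion q = (w, x, y, z) to the tangent space
    at the identity: returns the 3-vector axis * half-angle."""
    w, x, y, z = q
    s = math.sqrt(x * x + y * y + z * z)
    if s < 1e-12:
        return (0.0, 0.0, 0.0)           # (near-)identity rotation
    a = math.atan2(s, w)                 # half the rotation angle
    return (a * x / s, a * y / s, a * z / s)

def quat_exp(v):
    """Inverse of quat_log: map a tangent 3-vector back to a unit quaternion."""
    vx, vy, vz = v
    a = math.sqrt(vx * vx + vy * vy + vz * vz)
    if a < 1e-12:
        return (1.0, 0.0, 0.0, 0.0)
    s = math.sin(a) / a
    return (math.cos(a), s * vx, s * vy, s * vz)
```

PGA then amounts to mapping the rotation samples through the log map at their intrinsic mean, running PCA on the resulting 3-vectors, and mapping principal directions back with the exp map.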
The energy industry sector has to perform numerical simulation on very large data sets, in thermodynamics, mechanics, aerodynamics, neutronics, etc. Visualization of the results of these simulations is crucial in order to gain understanding of the phenomena that are simulated. The visualization techniques need to be interactive - if not real time - to be helpful for engineers. Therefore multiresolution techniques are required to accelerate the visual exploration of the data sets. In the PhD of Fabien Vivodtzev (who is now working at CEA on Visualization systems) we have developed multiresolution algorithms devoted to volumetric data sets based on tetrahedral grids in which inner structures of dimension 2, 1 or 0 are preserved. Typically these algorithms are used to compute a sequence of simplified volumetric meshes with good properties. In his first PhD year, Sébastien Barbier has worked on the interactive rendering of these simplified meshes. He is integrating today's standard visualization algorithms - including slicing, iso-surfacing, volume rendering - with the multiresolution models developed previously. This work has been published in .
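The structure-preserving simplification can be caricatured on a plain vertex graph: greedily collapse short edges, but never remove a vertex tagged as lying on a preserved inner structure. This toy sketch (hypothetical names, no geometric error metric) only illustrates the preservation constraint, not the actual tetrahedral-mesh algorithms:

```python
def simplify(points, edges, feature, target_count):
    """Greedy edge-collapse sketch on a vertex graph. Repeatedly collapse
    the shortest edge whose endpoints are not both 'feature' vertices,
    always removing the non-feature endpoint, until target_count vertices
    remain. Feature vertices (those on preserved 1D/2D inner structures)
    are never removed."""
    alive = set(range(len(points)))
    edge_set = {tuple(sorted(e)) for e in edges}

    def sq_length(e):
        a, b = e
        return sum((pa - pb) ** 2 for pa, pb in zip(points[a], points[b]))

    while len(alive) > target_count:
        candidates = [e for e in edge_set
                      if not (e[0] in feature and e[1] in feature)]
        if not candidates:
            break                          # nothing left that may collapse
        a, b = min(candidates, key=sq_length)
        gone, kept = (a, b) if a not in feature else (b, a)
        alive.discard(gone)
        new_edges = set()                  # reconnect edges of the removed
        for u, v in edge_set:              # vertex to the kept one
            u2 = kept if u == gone else u
            v2 = kept if v == gone else v
            if u2 != v2:                   # drop the collapsed (self-loop) edge
                new_edges.add(tuple(sorted((u2, v2))))
        edge_set = new_edges
    return alive, edge_set
```

The real algorithms additionally control the geometric and topological error of each collapse so that the simplified volumetric meshes keep good properties.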
This project is part of a collaboration with the research and development department of EDF, and with the LPPA (Laboratoire de Physiologie de la Perception et de l'Action, Collège de France). The focus of this project is on the following problem: how should human perception be taken into account in visualization algorithms, and more specifically in algorithms based on multiresolution techniques? Previous work in this area is mostly based on image-analysis techniques used to measure important features in a static image resulting from some visualization algorithm. These results do not take into account information about the specific person using the visualization system. We are especially interested in exploiting such information, such as the point at which the user is looking. We also want to insert dynamic parameters into the perceptual measure, such as the movement of the user's head, since such parameters greatly influence the actual perception of the rendered scene. In the framework of this collaboration, EDF is funding a PhD grant on these topics, started by Christian Boucheny in December 2005. Last year, we worked on a perceptual evaluation of Direct Volume Rendering (DVR) techniques. We identified limitations of DVR techniques in the perception of depth, and showed how dynamic rendering can, in some cases, overcome these limitations. This work has been published in . An extended version has been submitted to the journal ACM TAP.
The goal of this work is the real-time rendering and editing of large landscapes with forests, rivers, fields, roads, etc., with high rendering quality, especially in terms of detail and continuity. A first step towards this goal is the modeling, representation and rendering of the terrain itself. Since an explicit representation of the whole terrain elevation and texture at the maximum level of detail would be impossible, we generate them procedurally on the fly (completely from scratch, or based on low-resolution digital elevation models). Our main contribution in this context is to use vector-based data to efficiently and precisely model linear features of the landscape (such as rivers, hedges or roads), from which we can compute in real time the terrain texture and the terrain elevation (in order to correctly insert roads and rivers into the terrain; see figure ). We demonstrate the scalability of our approach with a 100×100 km² terrain in the Alps. We also show how the vector data can be used to control the procedural generation of vegetation and other objects on the terrain (such as bridges).
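The sketch below illustrates, with hypothetical names and parameters, how vector data can drive on-the-fly elevation: near a road polyline the terrain is flattened to the road profile, then blended back to the procedural height field over a falloff distance (the fake sine-based noise stands in for the real fractal or DEM-refined elevation).

```python
import math

def base_elevation(x, y):
    """Procedural stand-in for fractal terrain: a few octaves of a cheap
    periodic function (the real system refines DEM data on the fly)."""
    h = 0.0
    for octave in range(4):
        f = 2.0 ** octave
        h += math.sin(1.3 * f * x + 0.7 * f * y) * 100.0 / f
    return h

def dist_to_segment(p, a, b):
    """Euclidean distance from 2D point p to segment [a, b]."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def elevation(x, y, road, road_height, half_width=5.0, falloff=20.0):
    """Terrain elevation blended toward a flat road profile near the
    vector feature `road` (a polyline given as a list of 2D points)."""
    d = min(dist_to_segment((x, y), road[i], road[i + 1])
            for i in range(len(road) - 1))
    if d <= half_width:
        return road_height                   # on the road: flat profile
    t = min((d - half_width) / falloff, 1.0)
    s = t * t * (3.0 - 2.0 * t)              # smoothstep back to terrain
    return road_height * (1.0 - s) + base_elevation(x, y) * s
```

Because both terms are evaluated analytically per sample, the blended elevation can be computed independently for any (x, y), which is what makes this kind of scheme compatible with on-the-fly generation.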
This project is developed in the context of a European Marie Curie Outgoing International Fellowship (Marie Curie OIF, see Section ), allowing Philippe Decaudin to work as a visiting researcher at CASIA (Chinese Academy of Sciences, Institute of Automation, Beijing) for the first phase of the project, and to carry out the second phase at EVASION.
The main objective is to define a representation and an algorithm able to efficiently visualize 3D models of plants and trees, such as those developed by the GreenLab team at CASIA. The representation and algorithm must support the interactive exploration of virtual landscapes and ecosystems, rendering vegetation at interactive frame rates.
We focused our work on mid-range and far-distance views (see Figure ). In this context, texture-based volume rendering is an interesting alternative to polygon-based rendering for plant and tree visualization. This led us to design an efficient level-of-detail representation well suited to the real-time display of a large number of complex objects (a dense forest, for instance). It is an image-based representation, which means that it is independent of the geometric complexity of the object: its rendering cost mainly depends on the rasterization cost of the image projected on the screen. However, this representation remains truly 3D, which means that parallax effects are preserved (contrary to simple billboard representations, for instance), as is the possible integration with polygonal elements. It can also rely on MIP-mapping to use a filtered version of the volume according to the projected size of the voxels on the screen, leading to an anti-aliased display of the object.
In order to optimize the rendering of such volumes, we also developed a simple kd-tree based space partitioning scheme to efficiently remove the empty spaces from the volume data sets in a fast preprocessing stage. The splitting rule of the scheme is based on a simple yet effective cost function evaluated through a fast approximation of the bounding volume of the non-empty regions. The scheme culls a large number of empty voxels and encloses the remaining data with a small number of axis-aligned bounding boxes (Figure ), which are then used for interactive rendering. This work has been co-developed with Vincent Vidal during his internship at CASIA.
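A minimal version of such a partitioning scheme might look as follows (hypothetical code; the actual cost function and stopping criteria differ): each node shrinks to the tight bounding box of its non-empty voxels, then splits along the longest axis, and the leaves emit small axis-aligned boxes enclosing the remaining data.

```python
import numpy as np

def nonempty_bbox(vol):
    """Tight axis-aligned bounding box of non-empty voxels, or None."""
    idx = np.argwhere(vol)
    if idx.size == 0:
        return None
    return idx.min(axis=0), idx.max(axis=0) + 1   # [lo, hi) per axis

def cull_empty_space(vol, origin=(0, 0, 0), max_depth=6):
    """Recursive kd-style partition of a boolean occupancy volume:
    shrink to the occupied bounding box, split along the longest axis,
    and return a list of (lo, hi) boxes enclosing the non-empty data."""
    bb = nonempty_bbox(vol)
    if bb is None:
        return []                                  # fully empty: culled
    lo, hi = bb
    sub = vol[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    org = tuple(o + l for o, l in zip(origin, lo))
    ext = hi - lo
    axis = int(np.argmax(ext))
    if max_depth == 0 or ext[axis] < 4:
        return [(org, tuple(org[i] + ext[i] for i in range(3)))]
    cut = ext[axis] // 2
    sl_a = [slice(None)] * 3; sl_a[axis] = slice(0, cut)
    sl_b = [slice(None)] * 3; sl_b[axis] = slice(cut, None)
    org_b = list(org); org_b[axis] += cut
    return (cull_empty_space(sub[tuple(sl_a)], org, max_depth - 1)
            + cull_empty_space(sub[tuple(sl_b)], tuple(org_b), max_depth - 1))
```

On sparse data, the returned boxes cover all occupied voxels while spanning only a small fraction of the original volume, which is the property that makes them useful as tight rendering proxies.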
Antoine Bouthors continues his PhD on cumulus clouds. The goal is to design a high-quality real-time illumination model embedding the main local and global lighting effects in reflectance and transmittance (halo, glory, pseudo-specular, diffusion, etc.) in the form of a local shader. This year, we generalized the model to take into account complex cloud shapes such as cumulus clouds, in collaboration with Nelson Max (UC Davis / LLNL), with whom Antoine spent his Eurodoc stay.
Cyril Crassin conducted his Master's project with Fabrice Neyret and Sylvain Lefebvre (REVES project) on the real-time rendering of very large and detailed volumes, taking advantage of GPU-adapted data structures and algorithms. The main targets are cases where detail is concentrated at the interface between free space and clusters of density, as found in many natural volume data sets such as cloudy skies or vegetation, or in data represented as generalized parallax maps, hypertextures or volumetric textures. Our method is based on a dynamic N³-tree storing MIP-mapped 3D texture bricks in its leaves. We load onto the GPU, on the fly, only the necessary bricks at the necessary resolution, taking visibility into account. This keeps memory consumption low during interactive exploration and minimizes data transfer. Our ray-marching algorithm benefits from the multiresolution aspect of our data structure and provides real-time performance.
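As an illustration of the level-of-detail criterion behind such a scheme, the sketch below picks, for a given eye distance, the coarsest resolution level whose voxels still project to under one pixel. All names and parameters are hypothetical; the actual system also weighs visibility and transfer budget when deciding which bricks to load.

```python
import math

def required_level(distance, fov_y, screen_height, finest_voxel_size, max_level):
    """Coarsest tree level (0 = root) whose voxel footprint is no larger
    than one pixel at the given eye distance. `finest_voxel_size` is the
    world-space extent of a voxel at the deepest level `max_level`."""
    # world-space size covered by one pixel at this distance
    pixel_world = 2.0 * distance * math.tan(fov_y / 2.0) / screen_height
    level = 0
    voxel = finest_voxel_size * (2.0 ** max_level)  # coarsest voxel extent
    while level < max_level and voxel > pixel_world:
        voxel /= 2.0      # descend one level: voxels halve in size
        level += 1
    return level
```

The returned level decreases monotonically with distance, so distant regions of the volume are sampled from coarse, pre-filtered bricks, which is what keeps both memory use and aliasing under control.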
The above sections presented our research in terms of fundamental tools, models and algorithms. A complementary point of view is to describe it in terms of application domains. The following sections describe our contribution to each of these domains, with references to the tools we relied on if they were already presented above.
Several of the tools we are developing are devoted to a new generation of interactive modeling systems:
The real-time physically-based model for virtual clay presented in Section is dedicated to a sculpting system as close as possible to interaction with real clay.
The sketching tools presented in Section have been used to model garments and hair, and are being extended to model more general free form shapes. They are used in the industrial contract with Axiatec (see Section ).
Many of the diverse fundamental tools we are developing (see Sections , , , and ) are contributing to the long term, general goal of modeling and animating natural scenes. They can be combined to allow the large scale specification, efficient rendering and animation of landscapes (rivers and cloudy skies, etc). The synthesis of complete natural sceneries is one of the aims of the NatSim project (see Section ).
Some of our work on geometric modeling and physically-based animation has been successfully applied to the medical domain:
Our tools for efficient physically-based simulation, and in particular our new contributions to collision detection and response (see Section ), are being used in a new European medical project called Odysseus (see Section ).
Furthermore, Mathieu Nesme's PhD research (see Section ), co-advised by Yohan Payan of the TIMC laboratory, concentrates on the development of improved models of human tissue for surgical simulation.
Several of our new models and algorithms contribute to the animation of virtual creatures. This includes our work on motion capture from video (Section ) and our physically-based animation tools (Sections and ).
A first step towards the perceptual evaluation of animation has been achieved in collaboration with the Department of Psychology of the University of Geneva, for facial animation. A study was carried out to evaluate which parts of the brain are activated when a picture of an expressive face is shown to a subject, with the gaze pointing towards the subject or not. It was necessary to adapt a 3D model to standard photographs of expressive faces, so that the eye orientation in the photographs could be accurately controlled in a realistic manner.
This 18-month RIAM contract with Bionatics, started in 2006, aims at developing tools for the creation of landscape-related digital content and real-time models, including the procedural generation of shape, appearance and motion.
From September 2006 to September 2007, we conducted a project under an industrial contract with a new partner, Axiatec. The aim is to design a modeling system intuitive enough for the general public, so that people can model shapes and then receive a 3D print of them, produced by Axiatec. The product will be based on our research work on sculpting and sketching (see Sections and ).
We are still in close contact with the AMD/ATI and Nvidia development teams providing suggestions and bug reports, and testing prototype boards.
GRAVIT is a Rhône-Alpes region consortium that aims to mature research results toward industrialization, typically by supporting the development of platforms. EVASION participates in the GRAVIT project GVTR (GeoVisualisation en Temps Réel) with the ARTIS and REVES INRIA project-teams. The goal is to obtain a real-time engine into which new research results can easily be plugged. The duration was one year (October 2006 to September 2007), extended to August 2008. An engineer, Cédric Manzoni, started from the platform developed for the NatSim project (combining dedicated models for nature simulation, see Section ), which he is extending with an open API to ease the integration of new modules.
The GENAC project is supported by the "Pôle de Compétitivité Imaginove" in Lyon. Its goal is to develop procedural tools for the animation of virtual characters and the rendering of complex lighting in the specific case of video games. The participants are the EVASION and ARTIS projects, the LIRIS laboratory (Lyon 1), and Eden Games and Widescreen Games (video game companies in Lyon). The role of EVASION is specifically to provide procedural tools to combine motion capture data with the physical simulation of 3D characters. The work on motion-capture database compression carried out by Maxime Tournier during his Master's research will be extended to integrate physical parameters. To this end, Michael Adam has adapted the SOFA library to handle articulated rigid-body dynamics in the context of the GENAC project.
The mission of AIM@SHAPE is to advance research in the direction of semantic-based shape representations and semantic-oriented tools to acquire, build, transmit, and process shapes with their associated knowledge. We foresee a new generation of shapes in which knowledge is explicitly represented and, therefore, can be retrieved, processed, shared, and exploited to construct new knowledge. This Network of Excellence started in December 2003 and will end in December 2007.
Odysseus is a European EUREKA project on the simulation of laparoscopic surgery, running from 2004 to 2007. Driven by IRCAD, it involves two industrial partners (Karl Storz, SimSurgery) and three INRIA research projects: EVASION, EPIDAURE and ALCOVE. The overall goal is to develop commercial products for collaborative diagnosis and patient-specific planning. Our participation relates to the planning and real-time simulation of surgery using patient-specific data, and is based on the SOFA library described in Section . This project is now finished, and we are submitting a new one, called "Passport for Virtual Surgery", to continue the work.
Current techniques for animating the skin surface of a virtual creature from the motion of its skeleton do not take into account complex phenomena such as the rolling of the internal tissue over the bones. In order to tackle this problem, a research project has been initiated between EVASION and the National Museum of Natural History in Paris. For the study of locomotion, the Museum has access to X-ray video of live animals, making it possible to visualize the motion of the internal skeleton during locomotion. Using standard stereovision techniques, the 3D surface of the animal will be extracted and correlated, within an innovative machine-learning framework, with the internal X-ray data of the skeleton. As a result of this learning phase, more realistic 3D motion of the skin will be achieved, controlled by standard 3D skeleton motion. This three-year project, started in 2006, is sponsored by an ANR grant and gathers four participants: the EVASION project, the National Museum of Natural History, Université de Rennes and Université Paris 5.
This project aims at developing new techniques and hybrid representations to model, visualize, animate and transmit natural scenes. It involves EVASION, IRIT and LaBRI. EVASION is involved in two workpackages: editing tools to "sketch" high-level user specifications of the landscape, and the procedural modeling, simulation and real-time rendering of complex scenes (such as terrain, rivers, clouds and trees). The project started on 22/12/2005 and ends on 21/12/2008.
The aim of the CHENE-ROSEAU project is to develop and test experimental and theoretical methodologies for the analysis and simulation of the motion of plants induced by wind. Understanding how plants move under wind is important for several reasons. First, in terms of plant damage, excessive motion can lead to lodging in crops, where the deformation becomes permanent, or to windthrow in isolated or grouped trees. Second, motion of lesser amplitude is known to influence plant growth (thigmomorphogenesis), particle spreading, liquid retention on leaves and light spreading inside a canopy. More recently, applications in computer graphics for video games and movies have appeared, where the simulation of realistic plant motion remains a major difficulty. The partners in this project are the Laboratoire d'Hydrodynamique LadHyX (École polytechnique), UMR 547 PIAF (Physiologie Intégrée de l'Arbre Fruitier et Forestier, INRA), UR Ephyse (Écologie fonctionnelle et Physique de l'Environnement, INRA) and the EVASION project (INRIA Rhône-Alpes). This three-year project started in 2007.
The ARC Fantastik investigates the use of the particle-filter formalism to solve the inverse dynamics problem while taking mechanical constraints into account. The participants are the EVASION project (coordinator), the PERCEPTION project, the Centre de Mathématiques et de Leurs Applications (CNRS UMR 8536, ENS Cachan), and the SABRES and SAMSARA laboratories of the Université de Bretagne Sud.
Analyse Multirésolution d'objets 3D animés is a project funded by the GdR ISIS (Groupement de Recherche Information, Signal, Images et ViSion) from CNRS, in which we aim at developing multiresolution analysis techniques for animated 3D objects (e.g. for compression). We work on this project with Cédric Gérot from the GIPSA-Lab in Grenoble, Frédéric Payan from the I3S lab (University of Nice) and Basile Sauvage from the LSIIT lab (University of Strasbourg).
We lead the MIDAS (Modèles Interactifs Déformables pour l'Aide à la Surgétique) project, which also involves the TIMC and ICP laboratories, from 2005 to 2007. The goal of this project is to provide the biomechanics community with physically-based deformable models fast enough for use in per-operative planning, or in the context of trial-and-error parameter tuning where a large number of simulations are performed. These models (tetrahedral and hexahedral FEM) have been implemented in SOFA and compared to real data and to simulations made using the Ansys software during a student summer internship .
We also lead the MEGA (MÉthodes Géométriques de décomposition et déformation de surfaces pour l'Animation 3D) project, involving the GIPSA-Lab (image processing) and the MGMI team of the LJK (applied maths). This project aims at developing new approaches to surface deformation for character animation, using new geometrical models and tools with strong mathematical foundations. It began in 2006 and ends in December 2007. The work described in Section 6.1.5 is part of this project.
As a team of the LJK laboratory, we participate in the PPF (plan pluri-formation) 'Multimodal interaction' funded by the four universities of Grenoble, together with GIPSA-lab, LIG, TIMC and LPNC. This year, we started a collaboration with Renaud Blanch from the IIHM group of the LIG on the evaluation of the sketching and sculpting systems we developed for creating 3D shapes. See http://
LIMA (Loisirs et Images) is a Rhône-Alpes cluster project in the ISLE cluster (Informatique, Signal, Logiciel Embarqué). It federates many laboratories of the Rhône-Alpes region (LISTIC, LIRIS, LIS, CLIPS, LIGIV, LTSI, ICA and LJK ARTIS, EVASION, LEAR, MOVI) around two research themes: analysis and classification of multimedia data, and computer graphics and computer vision. The objectives are to index multimedia data with "high level" indexes, and to produce, analyze, animate and visualize very large databases, such as very large natural scenes.
Philippe Decaudin got a European Marie Curie Outgoing International Fellowship mobility funding (Marie Curie project REVPE MOIF-CT-22230) to go to Beijing and collaborate with CASIA (Chinese Academy of Sciences, Institute of Automation).
Jamie Wither, Qizhi Yu, Xing Mei and Robert Visser obtained European visitor mobility funding to join our laboratory for periods ranging from 4 months to 3 years.
Each year, several students get a regional Exploradoc grant to spend several months in a laboratory abroad. This year, Julien Diener is on a long stay at the DGP lab of the University of Toronto (from 05/2007 to 01/2008).
Lionel Reveret has been the coordinator of the I-MAGE associate team with the DGP laboratory of the University of Toronto. This associate team was started in 2005 and will be continued in 2008.
Nominated “Fellow of Eurographics” in 2005, Marie-Paule Cani is an elected member of the executive committee of the Eurographics association, and also a member of the executive committee of the French chapter of Eurographics. She also represents the LJK lab, to which EVASION belongs, on the executive committee of the French computer graphics association (AFIG).
Marie-Paule Cani has been a steering-committee member of the ACM-EG Symposium on Computer Animation since 2002, and was nominated in 2006 to the steering committee of the IEEE Shape Modeling & Applications conference. She is also a member of the Eurographics Workshop Board, as co-chair of the EG working group on Computer Animation. Lastly, she is a member of the EG working group on Sketch-Based Interfaces and Modeling.
Editorial boards:
Marie-Paule Cani is member of the editorial board of IEEE Transactions on Visualization and Computer Graphics.
Program Committees:
Marie-Paule Cani served on the advisory board of SIGGRAPH'2007 (3 people from the program committee helping the Papers Chair make decisions), and worked in fall 2007 as a member of the program committee of Eurographics 2008. She was also a PC member of the ACM I3D’2007 conference (Interactive 3D Graphics and Games). She will be papers co-chair of Sketch-Based Interfaces & Modeling, to be held in Annecy in June 2008.
Lionel Reveret was a member of the EG/ACM SCA'07 program committee, held in San Diego, CA, in 2007.
Fabrice Neyret served on the SIGGRAPH'07 and GI'07 program committees.
Fabrice Neyret gave an invited talk at the LadHyX (Laboratoire d'Hydrodynamique de l'Ecole polytechnique) in March 2007.
Marie-Paule Cani has been active for several years in associations promoting gender parity in the sciences, such as the "Association pour la Parité dans les Métiers Scientifiques et Techniques". During a research stay at the University of Otago, New Zealand, in March 2007, she gave an invited talk at the Anthropology/Gender Studies seminar: "Computers: a toy for males, a great tool for females?". She also helped staff a stand dedicated to “Women and Sciences” at the Science en Fête forum in Grenoble in October 2007.
The 'MobiNet' team (Joëlle Thollot at ARTIS, Fabrice Neyret and Franck Hétroy at EVASION, plus a dozen temporary assistants) organizes 8 half-day practice sessions per year for about 150 senior high-school students in the scope of the INPG "engineer weeks". The purpose is to give a more intuitive practice of maths and physics, and to give insights into programming and engineering. See Section and http://
In addition to the "engineer weeks", every year a group of "monitor" PhD students conducts an experiment based on MobiNet with a high-school class, as part of their teaching duties. This year, 4 students advised by Fabrice Neyret and Franck Hétroy prepared classroom exercises for maths and physics classes, in collaboration with the teachers, on the topic of statistical physics (reports and booklets are available on the MobiNet web site: http://
Moreover, Fabrice Neyret presents the tool and the experiments in various workshops and institutes ("journées Greco", IREM, IUFM, etc.). He also maintains a web site repository.
Fabrice Neyret takes part in various public outreach operations:
He co-organizes several operations:
He participates in the "Observatoire Zététique" (http://
He is part of the leading team of the “Cafés Sciences et Citoyen” (Grenoble), funded by the communication department of the CNRS, and maintains the web site http://www-evasion.imag.fr/cafesSC/. Conferences are organised on a monthly basis.
He maintains web sites for various events, plus some more related to research topics.
He participates as an expert in various public debates and press interviews.
In addition to the regular teaching activities (UJF, INPG) of the faculty members, several EVASION researchers taught courses within the "Computer Science" Research Master, the "Mathematical Engineering" Master, and the 3rd-year "Image and Virtual Reality" track at ENSIMAG.
Georges-Pierre Bonneau is Director of the Doctoral School of Computer Science and Applied and Pure Mathematics in Grenoble, and head of the "Geometry & Image" department of the LJK research laboratory (http://www-ljk.imag.fr).