With the rapid increase of computational power and memory space, increasingly complex and detailed 3D content is expected for virtual environments. Unfortunately, 3D modeling methodologies did not evolve as fast: most users still rely on standard CAD or 3D modeling software (such as Maya, 3DS or Blender) to design each 3D shape, animate it, and manually control cameras for movie production. This is highly time consuming when large amounts of detailed content need to be produced. Moreover, the quality of the result is left entirely in the user's hands, which restricts applicability to skilled professional artists. More intuitive software such as ZBrush is restricted to shape design and still requires a few months to master, even for sculpture practitioners. User load can be reduced by capturing and re-using real objects or motions, at the price of restricting the range of possible content. Lastly, procedural generation methods can, in specific cases, automatically produce detailed, plausible content. Although they save the user's time, these procedural methods typically come at the price of control: indirect parameters need to be tuned through a series of trials and errors until the desired result is reached. Stressing that even skilled digital artists tend to prefer pen and paper to 3D computerized tools during the design stages of shapes, motion, and stories, Rob Cook, vice president of technology at Pixar Animation Studios, recently stated (keynote talk, SIGGRAPH Asia 2009) that the new grand challenge in Computer Graphics is to make tools as transparent to artists as special effects were made transparent to the general public.
Could digital modeling be turned into a tool, even more expressive and simpler to use than a pen, to quickly convey and refine shapes, motions and stories? This is the long term vision towards which we would like to advance.
The goal of the IMAGINE project is to develop a new generation of models, algorithms and interactive environments for the interactive creation of animated 3D content and its communication through virtual cinematography.
Our insight is to revisit models for shapes, motion, and narration from a user-centred perspective, i.e. to give models an intuitive, predictable behaviour from the user's viewpoint. This will ease both the semi-automatic generation of animated 3D content and the fine tuning of the results. Three main fields will be addressed:
Shape design: We aim to develop intuitive tools for designing and editing 3D shapes and their assemblies, from arbitrary ones to shapes that obey application-dependent constraints - such as, for instance, developable surfaces representing cloth or paper, or shape assemblies used for CAD of mechanical prototypes.
Motion synthesis: Our goal is to ease the interactive generation and control of 3D motion and deformations, in particular by enabling intuitive, coarse to fine design of animations. The applications range from the simulation of passive objects to the control of virtual creatures.
Narrative design: The aim is to help users to express, refine and convey temporal narrations, from stories to educational or industrial scenarios. We develop both virtual direction tools such as interactive storyboarding frameworks, and high-level models for virtual cinematography, such as rule-based cameras able to automatically follow the ongoing action and automatic film editing techniques.
In addition to addressing specific needs of digital artists, this research contributes to the development of new expressive media for 3D content. The long term goal would be to enable any professional or scientist to model and interact with their object of study, to provide educators with ways to quickly express and convey their ideas, and to give the general public the ability to directly create animated 3D content.
As already stressed, thinking of future digital modeling technologies as an Expressive Virtual Pen, enabling users to seamlessly design, refine and convey animated 3D content, leads us to revisit models for shapes, motions and stories from a user-centered perspective. More specifically, taking inspiration from the user-centered interfaces developed in the Human Computer Interaction domain, we introduced the new concept of user-centered graphical models. Ideally, such models should be designed to behave, under any user action, the way a human user would have predicted. In our case, user actions may include creation gestures such as sketching to draft a shape or direct a motion, deformation gestures such as stretching a shape in space or a motion in time, or copy-paste gestures to transfer features from existing models to new ones. User-centered graphical models need to incorporate knowledge in order to seamlessly generate the appropriate content from such actions. We are using the following methodology to advance towards these goals:
Develop high-level models for shapes, motion and stories that embed the necessary knowledge to respond as expected to user actions. These models should provide the appropriate handles for conveying the user's intent while embedding procedural methods that seamlessly take care of the appropriate details and constraints.
Combine these models with expressive design and control tools such as gesture-based control through sketching, sculpting, or acting, towards interactive environments where users can create a new virtual scene, play with it, edit or refine it, and semi-automatically convey it through a video.
Validation is a major challenge when developing digital creation tools: there is no ideal result to compare with, in contrast with more standard problems such as reconstructing existing shapes or motions. Therefore, we had to think ahead about our validation strategy: new models for geometry or animation can be validated, as usually done in Computer Graphics, by showing that they solve a problem never tackled before, or that they provide a more general or more efficient solution than previous methods. The interaction methods we are developing for content creation and editing rely as much as possible on existing interaction design principles already validated within the HCI community. We also occasionally develop new interaction tools, most often in collaboration with this community, and validate them through user studies. Lastly, we work with expert users from various application domains through our collaborations with professional artists, scientists from other domains, and industrial partners: these expert users validate the use of our new tools compared to their usual pipeline.
This research can be applied to any situation where users need to create new, imaginary, 3D content. Our work should be instrumental, in the long term, for the visual arts, from the creation of 3D films and games to the development of new digital planning tools for theatre or cinema directors. Our models can also be used in interactive prototyping environments for engineering. They can help promote interactive digital design to scientists, as a tool to quickly express, test and refine models, as well as an efficient way of conveying them to other people. Lastly, we expect our new methodology to put digital modeling within the reach of the general public, enabling educators, media and other practitioners to author their own 3D content.
Our current application domains are:
Visual arts
Modeling and animation for 3D films and games.
Virtual cinematography and tools for theatre directors.
Engineering
Industrial design.
Mechanical & civil engineering.
Natural Sciences
Virtual functional anatomy.
Virtual plants.
Education and Creative tools
Sketch-based teaching.
Creative environments for novice users.
The diversity of users these domains bring, from digital experts to other professionals and novices, gives us excellent opportunities to validate our general methodology with different categories of users. Our ongoing projects in these various application domains are listed in Section 6.
Marie-Paule Cani obtained the annual Chair of Informatics and Computational Sciences of the Collège de France in Paris. She organized a series of lectures entitled Shaping Imaginary Content: from 3D Digital Design to Animated Virtual Worlds, and a symposium with 7 international invited speakers.
Rémi Ronfard organized the EG Workshop on Intelligent Cinematography and Editing, which was held for the first time as a Eurographics Workshop.
A demo of the Living Book of Anatomy (PhD work of Armelle Bauer) was presented in the Emerging Technologies exhibition at ACM SIGGRAPH Asia in November.
François Faure and Olivier Palombi founded a new startup, Anatoscope, in November 2015, working on Digital Anatomy for Personalized Healthcare.
We had 4 papers accepted to ACM SIGGRAPH and 2 accepted to ACM Transactions on Graphics (TOG); one of them was presented at ACM SIGGRAPH Asia.
Functional Description
Expressive is a new C++ library created in 2013 for gathering and sharing the models and algorithms developed within the ERC Expressive project. It enables us to make our latest research results on new creative tools - such as high-level models with intuitive sketching or sculpting interfaces - quickly available to the rest of the group and easily usable by our collaborators, such as Evelyne Hubert (Inria, Galaad) or Loic Barthe (IRIT, Toulouse). The most advanced part is a new version of Convol, a library dedicated to implicit modeling, with a main focus on integral surfaces along skeletons. Convol incorporates all the necessary material for constructive implicit modeling, a variety of blending operators, and several methods for tessellating an implicit surface into a mesh and refining it in highly curved regions. The creation of new solid geometry can be performed by direct manipulation of skeletal primitives or through sketch-based modeling and multi-touch deformations.
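The constructive skeleton-based approach behind Convol can be illustrated with a minimal sketch (plain Python, not Convol's actual C++ API; the kernel and operator choices below are illustrative assumptions): each skeletal primitive contributes a compactly supported scalar field, blending operators combine the fields, and the surface is an iso-contour of the result.

```python
def point_primitive(center, radius):
    """Field of a single skeletal point: 1 at the center, falling to 0
    beyond `radius` (a compactly supported kernel, as commonly used in
    skeleton-based implicit modeling)."""
    def f(p):
        d2 = sum((a - b) ** 2 for a, b in zip(p, center)) / radius ** 2
        return max(0.0, 1.0 - d2) ** 2
    return f

def blend_sum(*fields):
    """Classic sum blend: overlapping primitives merge smoothly."""
    return lambda p: sum(f(p) for f in fields)

def blend_max(*fields):
    """Max operator: a sharp union with no blending."""
    return lambda p: max(f(p) for f in fields)

# Two point skeletons whose supports overlap at the midpoint.
a = point_primitive((0.0, 0.0, 0.0), 1.5)
b = point_primitive((2.0, 0.0, 0.0), 1.5)
mid = (1.0, 0.0, 0.0)

surface_iso = 0.5  # the surface is the iso-contour f(p) = 0.5
# With the sum blend the midpoint lies inside the surface (the shapes fuse);
# with the max union it lies outside (two separate blobs).
print(blend_sum(a, b)(mid) > surface_iso > blend_max(a, b)(mid))
```

Changing the blending operator changes how primitives fuse, which is exactly the kind of control a library of blending operators provides.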
Participants: Marie-Paule Cani, Antoine Begault, Rémi Brouet, Even Entem, Thomas Delame, Ulysse Vimont and Cédric Zanni
Contact: Marie-Paule Cani
Keywords: 3D modeling - Simulation - Health - Ontologies - Anatomy - Patient-specific - Medical imaging
Functional Description
My Corporis Fabrica (MyCF) is an anatomical knowledge ontology developed in our group. It relies on FMA (Foundational Model of Anatomy), developed under a Creative Commons license (CC-by). The MyCF browser is available online, and is already in use for education and research in anatomy. Moreover, MyCF's generic programming framework can be used for other domains, since the link it provides between semantic and 3D models matches several other research applications at IMAGINE.
Participants: Olivier Palombi, Armelle Bauer, François Faure, Ali Hamadi Dicko
Contact: Olivier Palombi
Simulation Open Framework Architecture
Keywords: Physical simulation - Health - Biomechanics - GPU - Computer-assisted surgery
Functional Description
SOFA is an Open Source framework primarily targeted at real-time simulation, with an emphasis on medical simulation. It is mostly intended for the research community to help develop new algorithms, but can also be used as an efficient prototyping tool. Based on an advanced software architecture, it allows: the creation of complex and evolving simulations by combining new algorithms with algorithms already included in SOFA; the modification of most simulation parameters (deformable behavior, surface representation, solver, constraints, collision algorithm, etc.) by simply editing an XML file; the building of complex models from simpler ones using a scene-graph description; the efficient simulation of the dynamics of interacting objects using abstract equation solvers; and the reuse and easy comparison of a variety of available methods.
Participants: François Faure, Armelle Bauer, Olivier Carré, Matthieu Nesme, Romain Testylier.
Contact: François Faure
Scientist in charge: Stefanie Hahmann.
Other permanent researchers: Marie-Paule Cani, Jean-Claude Léon, Damien Rohmer.
Our goal is to develop responsive shape models, i.e. 3D models that respond in the expected way under any user action, by maintaining specific application-dependent constraints (such as volumetric objects keeping their volume when bent, or cloth-like surfaces remaining developable during deformation). We are extending this approach to composite objects made of distributions and/or combinations of sub-shapes of various dimensions.
Developable surfaces are surfaces that can be flattened onto a plane without being stretched or squeezed. In other words, they can be made from a 2D pattern without any change of lengths. They are usually hard to model efficiently, as the length-preservation condition is non-linear. This year we developed two different applications for developable surfaces, one applied to leather product design, and the other to virtual paper deformation.
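The length-preservation property can be checked numerically on the simplest developable surface, a cylinder. The following sketch (illustrative only, unrelated to either paper's implementation) unrolls a curve drawn on the cylinder into its 2D pattern and verifies that its length is unchanged:

```python
import math

def cylinder_point(theta, z, r=1.0):
    """A point on a cylinder of radius r: a developable surface."""
    return (r * math.cos(theta), r * math.sin(theta), z)

def unroll(theta, z, r=1.0):
    """Isometric flattening of the cylinder: (theta, z) -> 2D pattern."""
    return (r * theta, z)

def polyline_length(pts):
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

# A helical curve drawn on the cylinder, sampled finely.
n = 2000
samples = [(2 * math.pi * i / n, 0.5 * i / n) for i in range(n + 1)]
curve_3d = [cylinder_point(t, z) for t, z in samples]
curve_2d = [unroll(t, z) for t, z in samples]

# Lengths agree up to discretization error:
# unrolling neither stretches nor squeezes the surface.
print(abs(polyline_length(curve_3d) - polyline_length(curve_2d)))
```

A sphere, by contrast, admits no such length-preserving flattening, which is why developability is such a strong (and non-linear) modeling constraint.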
We developed a method to generate 3D models of garments and leather products from designer sketches. Given two or three orthogonal sketched views depicting the silhouette, the seams, and the folds, we automatically compute a 3D developable surface, and the corresponding 2D patterns, which fits the silhouette and exhibits the designed folds. Our method can handle complex cases where the 2D silhouette actually corresponds to a non-planar and discontinuous curve on the 3D surface. We also proposed a new efficient approach that improves the developability of the resulting surface while preserving the pre-designed folds. This work has been published in ACM Transactions on Graphics , and we presented it at SIGGRAPH Asia in November.
Within the PhD work of Camille Schreck, we developed the first interactive 3D virtual model of crumpled paper. Deforming virtual paper is especially challenging to model efficiently, as crumpling creates singularities on the surface, leading to non-smooth surfaces which do not fit well with standard physically based deformation models. We proposed a new geometrical representation of surfaces especially adapted to modeling non-smooth developable surfaces, as a set of planes, cylinders, and generalized cones meeting at the discontinuities of the surface. Our model dynamically adapts to the surface deformation and to new crumples, while being associated with an optimal mesh triangulation containing very few triangles. Our interactive deformation model interleaves a standard Finite Element Model on the coarse triangular mesh, guiding the general deformation, with a geometrical step adapting our surface structure to optimally sample the degrees of freedom of the crumpled paper. This work has been accepted for publication in ACM Transactions on Graphics , has been presented at the WomEncourage conference , and as a communication at AFIG .
A popular mode of shape synthesis involves mixing and matching parts from different objects to form a coherent whole. In collaboration with University College London, Universiteit Utrecht, and KAUST, we proposed a method to automatically detect replaceable subparts within a complex assembly. In this work, we model the geometrical assembly as a graph where each node represents a single component, and edges represent inter-part connectivity. Our method analyses this graph to detect similar inter-part connectivity, enabling the exchange or mixing of sub-structures to synthesise new geometrical models. This work has been published at Eurographics .
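The graph analysis can be sketched in a few lines (a toy example with a hypothetical chair assembly and a simplistic signature, not the published implementation): parts whose connectivity signatures coincide attach to the rest of the assembly in the same way, and are therefore candidates for exchange.

```python
from collections import defaultdict

# A toy chair assembly: nodes are components, edges are contacts between parts.
edges = [
    ("seat", "leg1"), ("seat", "leg2"), ("seat", "leg3"), ("seat", "leg4"),
    ("seat", "back"), ("back", "armrest_l"), ("back", "armrest_r"),
]

adjacency = defaultdict(set)
for u, v in edges:
    adjacency[u].add(v)
    adjacency[v].add(u)

# Component types, e.g. obtained from geometric similarity clustering.
labels = {"seat": "plate", "back": "plate", "leg1": "rod", "leg2": "rod",
          "leg3": "rod", "leg4": "rod", "armrest_l": "rod", "armrest_r": "rod"}

def connectivity_signature(node):
    """How a part attaches to the assembly: the multiset of its neighbours' types."""
    return tuple(sorted(labels[n] for n in adjacency[node]))

# Parts sharing a signature are interchangeable when synthesizing new models.
groups = defaultdict(list)
for node in adjacency:
    groups[connectivity_signature(node)].append(node)
replaceable = [sorted(g) for g in groups.values() if len(g) > 1]
print(replaceable)  # legs and armrests all attach as "rod onto plate"
```

The published method of course uses much richer geometric contact information than node labels, but the graph-matching structure of the problem is the same.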
We chose to focus on man-made objects to tackle the topic of shape assemblies. This choice is twofold: CAD models of virtual industrial prototypes provide an excellent, real-size test-bed for our methods, and it perfectly fits the demand from industrial partners such as EDF and Airbus Group. On a complementary basis, we have initiated a partnership with UCL (University College London) to address function-preserving assembly deformation.
Assemblies representing products are most often reduced to a collection of independent CAD models representing each component. To our knowledge, no approach has been proposed to generate CAD assembly models from 3D scans. An approach has been initiated in partnership with LIRIS (R. Chaine and J. Digne) and EDF, in the framework of a Rhône-Alpes region project (Potasse), starting with the PhD of P. Coves.
Following our previous work , a partnership with the Inria GRAPHIK team (F. Ulliana) has been set up, and a deductive logic framework has been coupled to the SALOME platform with the insertion of an ontology describing a subset of a product structure. This partnership is developed through the internship of H. Vilmart, to evolve toward an intrinsic, knowledge-based representation of a product structure that takes into account the isometries of components, using our prior work on symmetry analysis . The description of components through this product structure aims at supporting the generation of CAD assembly models from 3D scans, in order to derive functionally meaningful constraints on the relative positions of components extracted from scans.
In the scope of the ERC Expressive, a partnership has been set up with N. Mitra (UCL) with the starting PhD of R. Roussel addressing function-preserving assembly deformation.
Scientist in charge: François Faure.
Other permanent researchers: Marie-Paule Cani, Damien Rohmer, Rémi Ronfard.
Animating objects in real time is mandatory to enable user interaction during motion design. Physically-based models, an excellent paradigm for generating the motions a human user would expect, tend to lack efficiency for complex shapes due to their use of low-level geometry (such as fine meshes). Our goal is therefore twofold: first, develop efficient physically-based models and collision processing methods for arbitrary passive objects, by decoupling deformations from the possibly complex geometric representation; second, study the combination of animation models with geometric responsive shapes, enabling the animation of complex constrained shapes in real time. Our last goal is to start developing coarse-to-fine animation models for virtual creatures, towards easier authoring of character animation for our work on narrative design.
We keep improving fundamental tools in physical simulation, such as new insights on constrained dynamics presented at SIGGRAPH, which allow more stable simulations of thin inextensible objects. A new extension of our volumetric contact approach (SIGGRAPH 2010 and 2012) has been proposed, which applies a rotational reaction to contacts according to the shape of the contact area.
We have proposed an original approach to multi-resolution simulation, in which arbitrary deformation fields at different scales can be combined in a physically sound way. This contrasts with the refinement of a given technique, such as hierarchical splines or adaptive meshes.
Following the success of frame-based elastic models (Siggraph 2011), a real-time animation framework provided in SOFA and currently used in many of our applications with external partners, we proposed an extension to the cutting of surface objects , in collaboration with Berkeley, where Pierre-Luc Manteaux spent 4 months at the end of 2014.
Extending our results on animating paper crumpling, we proposed to synthesise the sound associated with paper material, with a real-time model dedicated to paper tearing. In this work, we model the specific case where two hands tear a flat sheet of paper on a table: we procedurally synthesise the geometrical deformation of the sheet using conical surfaces, the tearing using a procedural noise map, and the tearing sound as a modified white noise depending on the speed of the action. This work has been published at the Motion in Games conference .
We are also developing a sound synthesis method for paper crumpling. The geometrical surface deformation is analysed to drive a procedurally synthesised friction sound and a data-driven crumpling sound. This work is ongoing, and we presented a first communication at the AFIG conference .
A real-time spine simulation model leveraging the multi-model capabilities of SOFA was presented at an international conference on biomechanics . We also used a biomechanical model to regularize real-time motion capture and display, and performed live demos at the Emerging Technologies show of SIGGRAPH Asia, Kobe, Japan .
We are developing an ontology-based model of virtual human embryo development. On one side, a dedicated ontology stores the anatomical knowledge about organ geometry, relations, and development rules. On the other side, we synthesise an animated visual 3D model using the information from the ontology. This work can be seen as a first step toward interactive teaching, or simulation, of developmental anatomy, based on an ontology storing existing medical knowledge. This work has been published in the Journal of Biomedical Semantics .
Scientist in charge: Rémi Ronfard.
Other permanent researchers: Marie-Paule Cani, François Faure, Jean-Claude Léon, Olivier Palombi.
Our long-term goal is to develop high-level models helping users express and convey their own narrative content (from fiction stories to more practical educational or demonstrative scenarios). Before being able to specify the narration, a first step is to define models able to express some a priori knowledge about the background scene and the object(s) or character(s) of interest. Our first goal is to develop 3D ontologies able to express such knowledge. The second goal is to define a representation for narration, to be used in future storyboarding frameworks and virtual direction tools. Our last goal is to develop high-level models for virtual cinematography, such as rule-based cameras able to automatically follow the ongoing action, and semi-automatic editing tools making it easy to convey the narration via a movie.
During the third year of Adela Barbulescu's PhD thesis, we proposed a solution for converting a neutral speech animation of a virtual actor (talking head) to an expressive animation. Using a database of expressive audiovisual speech recordings, we learned generative models of audiovisual prosody for 16 dramatic attitudes (seductive, hesitant, jealous, scandalized, etc.) and proposed methods for transferring them to novel examples. Our results demonstrate that the parameters which describe an expressive performance present person-specific signatures and can be generated using spatio-temporal trajectories; parameters such as voice spectrum can be obtained at frame-level, while voice pitch, eyebrow raising or head movement depend both on the frame and the temporal position at phrase-level. This work was presented at the first joint conference on facial animation and audio-visual speech processing and in a live demo at the EXPERIMENTA exhibition in Grenoble, and was seen by 1200 visitors.
During the third year of Quentin Galvane's PhD thesis, we proposed a solution for planning complex camera trajectories in crowded animation scenes . This work was done in collaboration with Marc Christie in Rennes.
We also published new results from Vineet Gandhi's PhD thesis (defended in 2014) on the generation of cinematographic rushes from single-view recordings of theatre performances . In that paper, we demonstrate how to use our algorithms to generate a large range of dynamic shot compositions from a single static view, a process which we call "vertical editing". Our patent application on this topic was reviewed positively and is being extended.
Those techniques were used to automatically generate cinematographically pleasing rushes from a monitor camera during rehearsals at the Théâtre des Célestins, as part of the ANR project "Spectacle-en-Lignes". Results of the project are described in two papers , and we presented them to a professional audience during the Avignon theatre festival. This work was done in collaboration with the Institut de Recherche et d'Innovation (IRI) at Centre Pompidou and the SYLEX team at LIRIS.
We proposed a new computational model for film editing at the AAAI artificial intelligence conference, which is based on semi-Markov chains . Our model significantly extends previous work by explicitly taking into account the crucial aspect of timing (pacing) in film editing. Our proposal is illustrated with a reconstruction of a famous scene of the movie "Back to the Future" in 3D animation, and a comparison of our automatic film editing algorithms with the director's version. Results are further discussed in two companion papers . This work was done in collaboration with Marc Christie in Rennes. Future work is planned to extend this approach to live-action video (as described in the previous section) and to generalize it to non-linear film editing, including temporal ellipses and flashbacks.
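The role of explicit durations can be illustrated with a toy semi-Markov model of editing (hypothetical shot types, probabilities and duration means, not the published model): unlike a plain Markov chain, each state also carries a sojourn-time distribution, which is exactly what lets the model reason about pacing.

```python
import random

# Toy semi-Markov model: states are shot types, each with its own duration model.
shots = ["wide", "medium", "close-up"]
transition = {            # P(next shot | current shot); self-cuts excluded
    "wide":     {"medium": 0.7, "close-up": 0.3},
    "medium":   {"wide": 0.2, "close-up": 0.8},
    "close-up": {"wide": 0.4, "medium": 0.6},
}
mean_duration = {         # mean shot length in seconds (exponential sojourn)
    "wide": 6.0, "medium": 4.0, "close-up": 2.5,
}

def sample_edit(total_time, start="wide", rng=random.Random(0)):
    """Sample a sequence of (shot, duration) pairs covering `total_time` seconds."""
    timeline, t, state = [], 0.0, start
    while t < total_time:
        d = rng.expovariate(1.0 / mean_duration[state])
        d = min(d, total_time - t)          # clip the final shot to the end
        timeline.append((state, round(d, 2)))
        t += d
        r, acc = rng.random(), 0.0          # sample the next shot type
        for s, p in transition[state].items():
            acc += p
            if r <= acc:
                state = s
                break
    return timeline

edit = sample_edit(60.0)
print(edit)  # e.g. [("wide", 5.4), ("medium", 2.1), ...] summing to 60 s
```

Shortening the mean durations while keeping the same transition matrix produces a faster-paced cut of the same scene, which a duration-free Markov chain cannot express.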
Scientist in charge: Jean-Claude Léon.
Other permanent researchers: Marie-Paule Cani, Olivier Palombi, Damien Rohmer, Rémi Ronfard.
The challenge is to develop more effective ways to put the user in the loop during content authoring. We generally rely on sketching techniques for quickly drafting new content, and on sculpting methods (in the sense of gesture-driven, continuous distortion) for further 3D content refinement and editing. The objective is to extend these expressive modeling techniques to general content, from complex shapes and assemblies to animated content. As a complement, we are exploring the use of various 2D or 3D input devices to ease interactive 3D content creation.
The sculpting paradigm has been successfully applied to deforming simple smooth surfaces. More complex objects representing virtual characters or real-life objects are, however, modeled as hierarchies of shapes with elements, sub-elements and details. Applying sculpting deformations to such objects is challenging, as every part of the hierarchy should stay coherent through the deformation.
When an object can be represented as a smooth underlying surface plus a set of singular details, we proposed a real-time deformation approach enabling the user to freely stretch or squeeze the 3D object while continuously maintaining the details' appearance. Instead of stretching or squeezing the details the same way as the smooth underlying surface, we duplicate or merge them smoothly, while ensuring that the distribution of details keeps the same characteristics as the original one. We published this work at Shape Modeling International .
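A one-dimensional toy sketch of the idea (illustrative only, not the published algorithm): rather than scaling the detail spacing with the surface, the number of details is adjusted so their spacing stays close to the original.

```python
# Toy 1D version of detail-preserving stretch: when the supporting extent
# is stretched, details are duplicated (or merged) so that their spacing,
# and thus their appearance, is roughly preserved.
def place_details(length, spacing):
    """Place details roughly `spacing` apart along an extent of `length`."""
    count = max(1, round(length / spacing))
    step = length / count
    return [step * (i + 0.5) for i in range(count)]

before = place_details(10.0, 2.0)   # 5 details, 2.0 apart
after = place_details(15.0, 2.0)    # stretching adds details: 8, 1.875 apart
print(len(before), len(after))
```

Naively stretching instead would keep 5 details spaced 3.0 apart, visibly distorting the detail distribution.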
In the case of more general object hierarchies, we are developing a new methodology to apply generic deformations to complex assemblies while preserving their properties, by extending the shape grammar approach into our new deformation grammar. We presented our preliminary results as a communication at the GTMG conference .
Modeling virtual worlds is particularly challenging: the fractal-like distribution of details in terrain shapes makes them easy to identify, but very difficult to design using standard modeling software, even for expert users. Moreover, virtual worlds involve distributions of different categories of content over terrains, such as vegetation, houses, roads or rivers. Efficiently modeling these sets of elements, which are statistically correlated, is indeed a challenge.
This year, our contributions to tackle these issues were twofold:
Firstly, we investigated the use of a plate tectonics metaphor to generate plausible terrains from a simple vector map representing the location of the main rivers and mountain peaks. The method uses a Voronoi tessellation of peak locations to automatically generate tectonic plates, which themselves drive terrain folds. Hydraulic erosion is then used to further sculpt the terrain and add details, while the specified rivers are taken into account to maintain consistency with the input map. This work was published in . A more accurate modeling of large-scale fluvial erosion and plate tectonics phenomena was investigated in Guillaume Cordonnier's master's thesis and is the object of his PhD, which started in October 2015.
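The Voronoi step of this pipeline is easy to sketch (hypothetical peak coordinates; only the plate-labeling stage is shown, not the folding or erosion): each terrain cell is assigned to its nearest peak, and each Voronoi region becomes one tectonic plate.

```python
import math

# Peak locations from a hypothetical input vector map (grid coordinates).
peaks = [(10, 12), (40, 8), (25, 40), (55, 45)]

def plate_of(cell):
    """Voronoi assignment: a cell belongs to the plate of its nearest peak."""
    return min(range(len(peaks)), key=lambda i: math.dist(cell, peaks[i]))

# Label a coarse 64x64 terrain grid with plate ids.
plates = [[plate_of((x, y)) for x in range(64)] for y in range(64)]
print({plates[y][x] for y in range(64) for x in range(64)})  # all four plates appear
```

The plate boundaries produced this way are then the natural locations for compression folds, before erosion adds finer detail.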
Secondly, we proposed a paint-based interface to tackle the problem of easily populating a terrain with distributions of objects (trees, rocks, grass, houses, etc.) or with graph-like structures such as rivers and roads. The key point of our solution is to learn statistics about distributions of elements and their correlation with other distributions, with graph structures, or with terrain slope, and to store the resulting histograms as "colors" in a palette interface. After manually creating a few local distributions, the user selects them with a pipette tool and reuses them with a brush. We also provided a gradient tool to interpolate between two such "colors", a move tool enabling, for instance, groups of trees and rocks to be moved over a terrain while maintaining the adequate correlation with the local slope, and a deformation interface based on seam carving enabling a region of the virtual world to be seamlessly stretched or compressed. This work, a collaboration between Arnaud Emilien, who defended his PhD in December 2014, Ulysse Vimont, Marie-Paule Cani, and Bedrich Benes from Purdue University, was published at SIGGRAPH 2015 .
Until now, sketching and sculpting methods were restricted to the design of static shapes. One of our research goals has been to extend these interaction metaphors to motion design. This year, this included three specific contributions.
Firstly, to handle sketch-based representation of motion in the 2D case, we extended the static vector graphics complex data structure, which we had introduced at Siggraph last year, to vector graphics animations with time-varying topology . This second paper was presented at Siggraph again this year. The proposed data structure is augmented with a rich set of editing operations, which can be used to quickly interpolate 2D drawings with different topologies. This work was done within a collaboration with Boris Dalstein and Michiel van de Panne from UBC, Canada.
Secondly, following a first method enabling the sculpting of crowd animations (Jordao, Eurographics 2014), we developed a painting interface for specifying both the density and the main directions of motion in an animated crowd. The resulting system is still based on crowd patches, i.e. the crowd motion is an assembly of local trajectories defined in interconnected patches. Our new painting system, called Crowd-Art, uses discrete changes in loop trajectories to evolve the number of in/out constraints in each patch until the requested density and directions are best matched . This concluded Kevin Jordao's PhD thesis, co-advised by Julien Pettré from the MimeTIC team and in collaboration with Marc Christie, defended in December 2015.
Lastly, we developed the first expressive interface to interactively sketch and progressively sculpt and refine character motion. Our solution is based on a space-time sketching metaphor: the user sketches a single space-time stroke, which is used to initialize a series of dynamic lines of action, serving as intermediates to animate the character's model. Motion and shape deformation can be immediately replayed from this single stroke, since it sets shape, trajectory and speed (defined from the drawing speed) at the same time. Thanks to visual feedback, the user can easily refine the resulting motion by editing specific lines of action at fixed times, or by composing several motions together. This work, published at SIGGRAPH, is one of the first methods enabling arbitrary motion to be defined from scratch by a beginner . Together with another work enabling the addition of dynamics to character motion , this concluded Martin Guay's PhD thesis, defended in June 2015.
We received a doctoral grant from LABEX PERSYVAL, as part of the research program on authoring augmented reality (AAR), for PhD student Adela Barbulescu. Her thesis is entitled "Directing virtual actors by imitation and mutual interaction - technological and cognitive challenges". Her advisors are Rémi Ronfard and Gérard Bailly (GIPSA-LAB).
Additionally, this project funds the PhD thesis of Armelle Bauer, which started in October, co-advised by François Faure, Olivier Palombi, and Jocelyne Troccaz from TIMC-GMCAO. The goal is to tackle the scientific challenges of visualizing one's own anatomy in motion using Augmented Reality techniques.
We received a doctoral grant (AdR) from the ARC6 program to generate functional CAD assemblies from scanned data (PoTAsse: POint clouds To ASSEmblies), as a collaboration between the Imagine team (LJK/Inria) and the Geomod team (LIRIS). Our PhD student Pablo Coves is advised by Jean-Claude Léon and Damien Rohmer at Imagine, and by Raphaëlle Chaine and Julie Digne in the Geomod team.
This 3-year contract with two industrial partners, TeamTo and Mercenaries Engineering (software for production rendering), is a follow-up and a generalization of Dynam'it. The goal is to propose integrated software for the animation and final rendering of high-quality movies, as an alternative to the ever-ageing Maya. It will include dynamics similarly to Dynam'it, as well as innovative sketch-based kinematic animation techniques invented at Imagine by Martin Guay and Rémi Ronfard. This contract, started in October, funds 2 engineers for 3 years.
Chrome is a national project funded by the French Research Agency (ANR). The project is coordinated by Julien Pettré, member of MimeTIC. Partners are: Inria-Grenoble IMAGINE team (Rémi Ronfard), Golaem SAS (Stéphane Donikian), and Archivideo (François Gruson). The project was launched in September 2012. The Chrome project develops new and original techniques to massively populate huge environments. The key idea is to base our approach on the crowd-patch paradigm, which enables populating environments from sets of pre-computed portions of crowd animation. These portions undergo specific conditions to be assembled into large scenes. The question of visual exploration of these complex scenes is also raised in the project. We develop original camera control techniques to explore the most relevant parts of the animations without suffering occlusions due to the constantly moving content. A long-term goal of the project is to enable populating a large digital mockup of the whole of France (Territoire 3D, provided by Archivideo). Dedicated efficient human animation techniques are required (Golaem). A strong originality of the project is to address the problem of crowded scene visualisation through the scope of virtual camera control, a task which is coordinated by Imagine team-member Rémi Ronfard.
Three PhD students are funded by the project. Kevin Jordao is working on interactive design and animation of digital populations and crowds for very large environments; his advisors are Julien Pettré and Marie-Paule Cani. Quentin Galvane is working on automatic creation of virtual animation in crowded environments; his advisors are Rémi Ronfard, Marc Christie (MimeTIC team, Inria Bretagne) and Julien Pettré. Chen-Kin Lim is working on crowd simulation and rendering of the behaviours of various populations using crowd patches; her advisors are Rémi Ronfard, Marc Christie (MimeTIC team, Inria Bretagne) and Julien Pettré.
3-year collaboration with Inria teams Virtual Plants and Demar, as well as INRA (Agricultural research) and the Physics department of ENS Lyon. The goal is to better understand the coupling of genes and mechanical constraints in the morphogenesis (creation of shape) of plants. Our contribution is to create mechanical models of vegetal cells based on microscopy images. This project funds the Ph.D. thesis of Richard Malgat, who started in October, co-advised by François Faure (IMAGINE) and Arezki Boudaoud (ENS Lyon).
Title: EXPloring REsponsive Shapes for Seamless desIgn of Virtual Environments.
Program: ERC Advanced Grant
Duration: 04/2012 - 03/2017
Inria contact: Marie-Paule Cani
To make expressive and creative design possible in virtual environments, the goal is to totally move away from conventional 3D techniques, where sophisticated interfaces are used to edit the degrees of freedom of pre-existing geometric or physical models: this paradigm has failed, since even trained digital artists still create on traditional media and only use the computer to reproduce already designed content. To allow creative design in virtual environments, from early draft to progressive refinement and finalization of an idea, both interaction tools and models for shape and motion need to be revisited from a user-centred perspective. The challenge is to develop reactive 3D shapes – a new paradigm for high-level, animated 3D content – that will take form, refine, move and deform based on user intent, expressed through intuitive interaction gestures inserted in a user-knowledge context. Anchored in Computer Graphics, this work reaches the frontier of other domains, from Geometry, Conceptual Design and Simulation to Human Computer Interaction.
Title: Position and Personalize Advanced Human Body Models for Injury Prediction
Program: FP7
Duration: November 2013 - April 2017
Inria contact: F. Faure
In passive safety, human variability is currently difficult to account for using crash test dummies and regulatory procedures. However, vulnerable populations such as children and the elderly need to be considered in the design of safety systems in order to further reduce fatalities by protecting all users, and not only so-called averages. Based on the finite element method, advanced Human Body Models (HBM) for injury prediction have the potential to represent the population variability and to provide more accurate injury predictions than alternatives using global injury criteria. However, these advanced HBM are underutilized in industrial R&D. Reasons include difficulties to position the models – which are typically only available in one posture – in actual vehicle environments, and the lack of model families to represent the population variability (which reduces their interest when compared to dummies). The main objective of the project will be to develop new tools to position and personalize these advanced HBM. Specifications will be agreed upon with future industrial users, and an extensive evaluation in actual applications will take place during the project. The tools will be made available by using an Open Source exploitation strategy and extensive dissemination driven by the industrial partners. Proven approaches will be combined with innovative solutions transferred from computer graphics, statistical shape and ergonomics modeling. The consortium will be balanced between industrial users (with seven European car manufacturers represented), academic users involved in injury biomechanics, and partners with different expertise with strong potential for transfer of knowledge. By facilitating the generation of population and subject-specific HBM and their usage in production environments, the tools will enable new applications in industrial R&D for the design of restraint systems as well as new research applications.
Laurent Grisoni (Univ. Lille / Inria): An HCI view of sketch-based interaction. (12/11/2015).
Philippe Guillotel (Technicolor), Arnav Jhala (Univ. of California Santa Cruz), Mateu Sbert (University of Girona), Karan Singh (University of Toronto), participated in the Expressive Cinematography seminar (26/10/2015).
Adrien Bousseau (Inria Sofia Antipolis): Computer Drawing Tools for Assisting Learners, Hobbyists, and Professionals (01/10/2015).
Ludovic Hoyet (Inria Rennes), Perception of Biological Human Motion: Towards New Perception-Driven Virtual Character Simulations (10/09/2015).
Michiel van de Panne (University of British Columbia), Animation Potpourri: New Models for Animated Vector Graphics, Motion Optimization, and Data-driven Animation (03/07/2015).
Henri Gouraud, Histoire de l'ombrage de Gouraud (05/06/2015).
Jean-Michel Dischler (Univ. Strasbourg), Procedural texturing from Example (28/05/2015).
Paul Kry (MacGill University), Balancing Speed and Fidelity in Physics Based Animation and Control (23/04/2015).
Rémi Ronfard was co-chair of the second Eurographics workshop on « Intelligent Cinematography and Editing » (WICED), Lisbon, Portugal, May 2016 (with Marc Christie, Inria/MIMETIC and Quentin Galvane, Technicolor).
Rémi Ronfard was co-chair of the Computational Models of Narrative (CMN 2016) conference, Kraków, Poland, July 2016, with Mark Finlayson (Florida International University), Antonio Lieto (University of Torino) and Ben Miller (Georgia State University).
Damien Rohmer was a member of the organization committee for the Journées Informatique et Géométrie (JIG) in November in Paris.
Rémi Ronfard was a member of the organization committees for the Workshop on Intelligent Narrative Technologies (INT) and the International Conference on 3D Imaging (IC3D).
Stefanie Hahmann is the program co-chair of the next SPM 2016 conference in Berlin.
Marie-Paule Cani was in the International Program Committee for SMI 2015, Expressive 2015 and MIG 2015, and is in the committee for EUROGRAPHICS 2016.
Stefanie Hahmann served in the International Program Committee for GD/SPM’15 SIAM Conf. on Geometric and Physical Modeling, Salt Lake City, CGI’15 Computer Graphics International, Strasbourg and EUROGRAPHICS 2016.
Damien Rohmer was in the International Program Committee for EUROGRAPHICS Short Papers and for ACM SIGGRAPH Asia Technical Briefs and Posters. He was also a member of the best paper jury for AFIG-EGFR.
Damien Rohmer was reviewer for ACM SIGGRAPH and EUROGRAPHICS.
Rémi Ronfard was reviewer for CVPR, CHI, and SIGGRAPH.
Stefanie Hahmann has been an associate editor of the journal Computers and Graphics (Elsevier) since 2015, and is co-editor of a special issue of the journal Computer-Aided Design (CAD) on Geometric Modeling.
Damien Rohmer was reviewer for ACM Transactions on Graphics, Computer Graphics Forum, and REFIG.
In addition to holding the chair in Informatics and Computational Sciences at Collège de France, and giving 9 lectures there, Marie-Paule Cani gave the following invited talks:
ENS Lyon (April 2015),
Seminar Jeudis de l'Imaginaire, Télécom Paris-Tech (April 2015),
Seminar of the ICUBE laboratory, Strasbourg (October 2015),
Seminar of the DI, ENS Paris (October 2015),
Two invited conferences organized by the French Embassy in Tunisia, one open to the general public and a specialized one for students and faculty members.
Stefanie Hahmann gave an invited talk at Universität Linz, Austria, June 2015.
Rémi Ronfard gave the following invited talks:
Variations sur La Ronde with Adéla Barbulescu, Gérard Bailly, Georges Gagneré. Biennale Arts-Sciences, EXPERIMENTA, Grenoble, October 2015.
Mettre en scène les mondes virtuels, Séminaire Création et digital dans le spectacle vivant, Relais Culture Europe, Festival International de Théâtre, Avignon, July 2015.
Mettre en scène les mondes virtuels, Journées scientifiques Inria, June 2015.
Marie-Paule Cani was
At international level
Vice chair of the Eurographics association.
Chair of the steering committee of the Eurographics conference.
Member of the awards committee of Eurographics
Member of the steering committees of the Expressive symposium and of the SMI conference
And in France
Member of the scientific council of the INS2I institute of CNRS.
Member of the scientific council of the ESIEE engineering school.
Member of the executive board of the GDR IGRV of CNRS.
Elected member of the executive board of EG-France
Stefanie Hahmann was
Head of the French working group GT Modélisation Géométrique, part of the GDR IGRV and GDR IM of CNRS.
Member of the executive board of the GDR IGRV of CNRS.
Member of the steering committee of annual international SPM conferences.
Elected member of the international Executive Committee of the Solid Modeling Association (SMA) since 2013.
Rémi Ronfard is
Elected to the board of the French Chapter of Eurographics in 2015.
Elected to the board of the French association for Computer Graphics (AFIG) in 2015.
Part of the InriaRT open network of Inria researchers and teams working with IRCAM on Art/science projects, which also includes Arshia Cont (HYBRID), Laurent Grisoni (MINT), Pierre-Yves Oudeyer (FLOWERS) and Martin Hachet (POTIOC). The goal of the InriaRT network is to extend IRCAM's expertise in art & science beyond music, and to showcase Inria research work in art projects featured by IRCAM. A coordinated project for an interactive opera is currently being planned for production in 2017-2018.
Marie-Paule Cani was reviewer of an ERC project.
Stefanie Hahmann is member of the Advisory Board for the European Marie-Curie Training Network ARCADES, 2014-2018.
Rémi Ronfard is a scientific expert for the European Commission, and was a reviewer for the final review of the EC-funded project IMP-ART (December 2015).
Marie-Paule Cani is the head of the Imagine project-team.
Rémi Ronfard is
Director of the « Geometry and Image » department of Laboratoire Jean Kuntzmann, UMR CNRS 5224, Grenoble University.
Project leader at Inria for ANR funded project SPECTACLENLIGNES (2013-2015) : The project ended in December 2014 and the main results were presented publicly at Centre Pompidou, Paris in February 2015 (see http://
Project leader at Inria for the national « investing in the future » project ACTION 3DS (2011-2014) : The project ended in September 2015 and the final review took place in October 2015. See http://
Project leader at Inria for ANR-funded project CHROME (2012-2015) : The project ended in October 2015.
Damien Rohmer was elected as a member of the Conseil d'Administration of the Association Francaise d'Informatique Graphique (AFIG).
Marie-Paule Cani is responsible for two courses at Ensimag/Grenoble-INP: 3D Graphics (a course that attracts about 80 Master 1 students per year) and IRL (Introduction à la recherche en laboratoire), a course enabling engineering students to work on a personal project in a research lab during one semester, to get an idea of what academic research is.
Stefanie Hahmann is co-responsible for the MMIS (Images and Applied Maths) (http://
Olivier Palombi is responsible for the French Campus numérique of anatomy.
Olivier Palombi is responsible for, and national leader of, the project SIDES (http://
François Faure was responsible for the GVR (Graphics, Vision and Robotics) program in the MOSIG Master.
Damien Rohmer is coordinator of the Math, Signal, Image program at the engineering school CPE Lyon, supervising the scientific and technical content of the program. He is also co-responsible for the Image, Modeling & Computing specialization program, which attracts 35 students per year. He gives, and is responsible for, one Computer Science class (130 students, 3rd year Bachelor level), an introductory Computer Graphics class (110 students, Master 1st year), and 5 specialization classes on C++ programming, OpenGL programming, 3D modeling, animation and rendering (35 students, Master 1st and 2nd year). Damien Rohmer also gave an invited course on Computer Graphics at Collegium Da Vinci, Poznan University (15 M1 students in the Computer Game Program) and a training course on the Python language to high school teachers starting the Informatique et Sciences du Numérique program.
Rémi Ronfard taught courses in Advanced 3D Animation, M2R MOSIG, University of Grenoble (12 hours) on advanced concepts in 3D character animation; Game Engine Programming, M2R IMAGINA, University of Montpellier (36 hours); and Computer Vision Analysis of Actors and Actions in the I2S Doctoral School of Montpellier (3h). Rémi Ronfard also created a new introductory course in narratology for doctoral students in computer science: Computational Modeling of Narrative Texts, Movies and Games, Grenoble University.
Note that MOSIG is a joint master program between University Joseph Fourier (UJF) and Institut Polytechnique de Grenoble (INPG), taught in English since it hosts a number of international students. It belongs to the doctoral school MSTII.
Most of the members of our team are Professors or Assistant Professors at the University, where the usual teaching load is about 200h per year. Rémi Ronfard, who is a full-time researcher, usually performs some teaching as a part-time lecturer.
PhD : Quentin Galvane, Automatic Cinematography and Editing in Virtual Environments, Grenoble Université, October 2015, Rémi Ronfard, Marc Christie (IRISA/Inria).
PhD : Adela Barbulescu, Directing virtual actors by interaction and mutual imitation, Grenoble Université, November 2015, Rémi Ronfard, Gérard Bailly (Gipsa-Lab).
PhD : Rémi Brouet, Multi-touch gesture interactions and deformable geometry for 3D edition on touch screen, Grenoble Université, March 2015, Marie-Paule Cani, Renaud Blanch (LIG).
PhD : Sergi Pujades Rocamora, Camera models and algorithms for creating stereoscopic 3-D video, Grenoble Université, March 2015, Rémi Ronfard, Frédéric Devernay (PRIMA).
PhD : Martin Guay, Sketching 3D Character Animation, Grenoble University, June 2015, Marie-Paule Cani, Rémi Ronfard.
PhD : Kevin Jordao, Crowd Sculpting, Rennes University, December 2015, Marie-Paule Cani, Julien Pettré (Inria Rennes).
PhD : Léo Allemand-Giorgis, Surface Reconstruction from Morse-Smale Complexes, Grenoble University, S. Hahmann, G-P. Bonneau.
PhD : Richard Malgat, Computational modeling of internal mechanical properties of plant cells, Grenoble University, September 2015, Francois Faure, Arezki Boudaoud (RDP).
PhD in progress : Armelle Bauer, the Living Book of Anatomy, UJF, Francois Faure, Olivier Palombi.
PhD in progress : Guillaume Cordonnier, Design of Mountains, Grenoble University, Marie-Paule Cani, Eric Galin (LIRIS).
PhD in progress : Pablo Coves, From Point Cloud Data to Functional CAD Model, INP Grenoble, Jean-Claude Léon, Damien Rohmer, Raphaëlle Chaine (LIRIS).
PhD in progress : Even Entem, Animal sketching, INP Grenoble, Marie-Paule Cani.
PhD in progress : Geoffrey Guingo, Animated Textures, INP Grenoble, Marie-Paule Cani, Jean-Michel Dischler (ICUBE).
PhD in progress : Pierre-Luc Manteaux, Frame Based Simulation, Grenoble University, Francois Faure, Marie-Paule Cani.
PhD in progress : Aarohi Singh Johal, CAD construction graph generation from B-Rep model, Grenoble University and EDF, Jean-Claude Léon, Raphaël Marc (EDF).
PhD in progress : Camille Schreck, Modeling and Deformation of Active Shapes, Grenoble University, Stefanie Hahmann, Damien Rohmer.
PhD in progress : Robin Roussel, Function-preserving editing, Grenoble University, Marie-Paule Cani, Jean-Claude Léon, Niloy Mitra.
PhD in progress : Tibor Stanko, Modeling of Smooth Meshed Surfaces from Inertial Sensors, Grenoble University and CEA, Stefanie Hahmann, Georges-Pierre Bonneau (LJK/Inria-Morpheo), collaboration with Nathalie Seguin (CEA-Leti).
PhD in progress : Ulysse Vimont, Complex scene sculpture, Grenoble University, Marie-Paule Cani, Damien Rohmer.
The images of the flat torus rendered by Damien Rohmer within the collaboration with the HEVEA team were presented during the Abel Prize ceremony honoring John Nash.
Stefanie Hahmann is responsible for the Maths-Info specialty at the Grenoble Graduate School of Computer Science and Mathematics MSTII (with about 400 PhD students registered).