Morpheo's main objective is to perceive and interpret moving shapes using multiple-camera systems, for the analysis of animal motion, animation synthesis, and immersive and interactive environments. Multiple-camera systems allow dense information on both shapes and their motion to be recovered from visual cues. This ability to perceive shapes in motion opens a rich domain for research on how to model, understand and animate real dynamic shapes. In order to reach this objective, several scientific and technological challenges must be faced:
A first challenge is to recover shape information from videos. Multiple-camera setups allow shapes, as well as their appearance, to be acquired with a reasonable level of precision. However, most effective current approaches estimate static 3D shapes, and the recovery of temporal information, such as motion, remains a challenging task. Another challenge in the acquisition process is the ability to handle the heterogeneous sensors with different modalities available nowadays: color cameras, time-of-flight cameras, stereo cameras, structured-light scanners, etc.
A second challenge is the analysis of shapes. Few tools have been proposed for that purpose, and recovering the intrinsic nature of shapes is a current and active research domain. Of particular interest is the study of animal shapes and of their associated articulated structures. An important task is to automatically infer such properties from temporal sequences of 3D models obtained with the previously mentioned acquisition systems. Another task is to build models for classes of shapes, such as animal species, that allow for both shape and pose variations.
A third challenge concerns the analysis of the motion of shapes that move and evolve, typically humans. This has been an area of interest for decades; the challenging innovation is to consider for this purpose dense motion fields, obtained from temporally consistent 3D models, instead of the traditional sparse point trajectories obtained by tracking particular features on shapes, e.g. with motion capture systems. The interest is to provide full information on both motions and shapes, and the ability to correlate this information. The main tasks that arise in this context are, first, to find relevant indices to describe the dynamic evolutions of shapes and, second, to build compact representations for classes of movements.
A fourth challenge tackled by Morpheo is immersive and interactive systems. Such systems rely on real-time modeling of shapes, motions or actions. Most shape and motion recovery methods turn out to be fairly complex and quickly run into hardware processing or bandwidth limitations, even with a limited number of cameras. Achieving interactivity thus calls for scalable methods and for research on specific distribution and parallelization strategies.
The Morpheo team was created in March 2011.
The ANR project Morpho is coordinated by Morpheo. This project aims at designing new technologies for the measurement and analysis of dynamic surface evolutions using visual data. Three academic partners collaborate on this project: INRIA Grenoble Rhône-Alpes with the Morpheo team, the GIPSA-lab Grenoble and INRIA-Lorraine with the Alice team.
A collaboration between Morpheo and Technicolor was initiated in 2011. The objective is to develop new gesture interfaces using visual inputs such as color and depth cameras. A co-supervised PhD started in May 2011, and Technicolor and Morpheo are also collaborating on this subject within the Quaero project.
Recovering shapes from images is a fundamental task in computer vision. Applications are numerous and include, in particular, 3D modeling and mixed-reality applications where real shapes are mixed with virtual environments. The problem faced here is to recover shape information, such as surfaces, from image information. A tremendous research effort has been made in the past to solve this problem in the static case, and a number of solutions have been proposed. However, a fundamental issue still to be addressed is the recovery of full shape models with possibly evolving topologies using time-sequence information. The main difficulties are the precision and robustness of the computed shapes, as well as the consistency of these models over time. Additional difficulties include the integration of multi-modality sensors as well as real-time applications.
Acquisition of 4D models can often be conveniently formulated as a Bayesian estimation or learning problem. Various generative and graphical models can be proposed for the problems of occupancy estimation, 3D surface tracking in a time sequence, and motion segmentation. The idea of these generative models is to predict the noisy measurements (e.g. pixel values, measured 3D points or speed quantities) from a set of parameters describing the unobserved scene state, which in turn can be estimated using Bayes' rule to solve the inverse problem. The advantages of this type of modeling are numerous: it makes it possible to model the noisy relationships between observed and unknown quantities specific to the problem, to deal with outliers, and to efficiently account for various types of priors about the scene and its semantics. Sensor models for different modalities can also be seamlessly integrated and jointly used, which remains central to our goals.
Since acquisition problems often involve a large number of variables, a key challenge is to exhibit models which correctly account for the observed phenomena, while keeping reasonable estimation times, sometimes with a real-time objective. Maximum likelihood / maximum a posteriori estimation and approximate inference techniques, such as Expectation-Maximization, Variational Bayesian inference, or Belief Propagation, are useful tools to keep the estimation tractable. While 3D acquisition has been extensively explored, the research community faces many open challenges in how to model and specify more efficient priors for 4D acquisition and temporal evolution.
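As an illustration of the approximate inference tools mentioned above, the Expectation-Maximization alternation can be sketched on a toy one-dimensional Gaussian mixture (a generic textbook example, not one of the team's actual acquisition models):

```python
import numpy as np

def em_gmm_1d(x, k=2, n_iter=50):
    """Fit a k-component 1D Gaussian mixture by maximum likelihood with EM."""
    mu = np.quantile(x, np.linspace(0.05, 0.95, k))  # spread-out initial means
    var = np.full(k, x.var())                        # initial variances
    pi = np.full(k, 1.0 / k)                         # mixture weights
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        d = x[:, None] - mu[None, :]
        p = pi * np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from responsibility-weighted data
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu[None, :])**2).sum(axis=0) / nk
        pi = nk / len(x)
    return mu, var, pi

# Two well-separated clusters: EM should recover means near 0 and 5.
x = np.concatenate([np.random.default_rng(1).normal(0, 0.3, 200),
                    np.random.default_rng(2).normal(5, 0.3, 200)])
mu, var, pi = em_gmm_1d(x)
```

The E-step computes posterior responsibilities under the current parameters and the M-step re-estimates the parameters from the weighted data; the 4D acquisition problems above reuse this structure with far richer scene states and sensor models.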
Spectral geometry processing consists of designing methods to process and transform geometric objects that operate in frequency space. This is similar to what is done in signal processing and image processing where signals are transposed into an alternative frequency space. The main interest is that a 3D shape is mapped into a spectral space in a pose-independent way. In other words, if the deformations undergone by the shape are metric preserving, all the meshes are mapped to a similar place in spectral space. Recovering the coherence between shapes is then simplified, and the spectral space acts as a “common language” for all shapes that facilitates the computation of a one-to-one mapping between pairs of meshes and hence their comparisons. However, several difficulties arise when trying to develop a spectral processing framework. The main difficulty is to define a spectral function basis on a domain which is a 2D (resp. 3D for moving objects) manifold embedded in 3D (resp. 4D) space and thus has an arbitrary topology and a possibly complicated geometry.
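A minimal numerical sketch of this idea uses the combinatorial graph Laplacian of a mesh's vertex graph as the operator whose eigenvectors provide the spectral basis (a deliberately simplified stand-in for the geometric Laplacians used on real surfaces):

```python
import numpy as np

def graph_laplacian(n, edges):
    """Combinatorial Laplacian L = D - A of a mesh's vertex graph."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, j] -= 1; L[j, i] -= 1
        L[i, i] += 1; L[j, j] += 1
    return L

def spectral_embedding(n, edges, k=2):
    """Map each vertex to its coordinates in the first k non-constant
    Laplacian eigenvectors -- a pose-independent 'frequency space'."""
    w, v = np.linalg.eigh(graph_laplacian(n, edges))
    return v[:, 1:k+1]          # skip the constant eigenvector (eigenvalue 0)

# A 6-cycle as a toy "mesh": its Laplacian spectrum is known analytically,
# lambda_j = 2 - 2*cos(2*pi*j/6).
edges = [(i, (i + 1) % 6) for i in range(6)]
emb = spectral_embedding(6, edges)
```

Because the embedding depends only on the connectivity (and, for geometric Laplacians, on the metric), two isometric poses of the same shape land close together in this space, which is what makes the spectral domain a "common language" between meshes.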
Recovering the temporal evolution of a deformable surface is a fundamental task in computer vision, with a large variety of applications ranging from the motion capture of articulated shapes, such as human bodies, to the deformation of complex surfaces such as clothes. Methods that solve this problem usually infer surface evolutions from motion or geometric cues. This information can be provided by motion capture systems or by one of the numerous available static 3D acquisition modalities. In this inference, methods face the challenging estimation of the time-consistent deformation of a surface from cues that can be sparse and noisy. Such an estimation is an ill-posed problem that requires prior knowledge of the deformation to be introduced in order to limit the range of possible solutions.
The goal of motion analysis is to understand movement in terms of movement coordination and the corresponding neuromotor and biomechanical principles. Most existing tools for motion analysis take as input rotational parameters obtained through an articulated body model, e.g. a skeleton, such a model being tracked using markers or estimated from shape information. Articulated motion is then traditionally represented by trajectories of rotational data, each rotation in space being associated with the orientation of one limb segment in the body model. This offers a high-dimensional parameterization of all possible poses. Typically, using a standard set of articulated segments for a 3D skeleton, this parameterization offers a number of degrees of freedom (DOF) that ranges from 30 to 40. However, it is well known that for a given motion performance, the trajectories of these DOF span a much smaller space. Manifold learning techniques on rotational data have proven their relevance for representing various motions in subspaces of high-level parameters. However, rotational data encode motion information only, independently of morphology, thus hiding the influence of shape on motion parameters. One of the objectives is to investigate how motions of human and animal bodies, i.e. dense surface data, span manifolds in higher-dimensional spaces and how these manifolds can be characterized. The main motivation is to propose morpho-dynamic indices of motion that account for both shape and motion. Dimensionality reduction will be applied to these data and used to characterize the manifolds associated with human motions. To this purpose, the raw mesh structure cannot be statistically processed directly, and appropriate feature extraction as well as innovative multidimensional methods must be investigated.
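The low-dimensional structure of such DOF trajectories can be illustrated with plain linear dimensionality reduction (PCA via the SVD); the data below are synthetic pose vectors driven by a single hidden phase variable, not captured motion:

```python
import numpy as np

def pca_reduce(X, d):
    """Project rows of X (one pose per row) onto the top-d principal axes."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data: rows of Vt are the principal directions
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s**2 / (s**2).sum()
    return Xc @ Vt[:d].T, explained[:d]

# Synthetic "motion": 100 frames of a 30-DOF pose vector driven by one
# hidden phase -- the trajectories span a 2D subspace of the 30D pose space.
t = np.linspace(0, 2 * np.pi, 100)
basis = np.random.default_rng(0).normal(size=(2, 30))
X = np.outer(np.sin(t), basis[0]) + np.outer(np.cos(t), basis[1])
Z, ev = pca_reduce(X, 2)
```

Two components capture essentially all of the variance here; on real surface data the manifolds are curved and high-dimensional, which is why the nonlinear methods discussed above are needed.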
Modeling shapes that evolve over time, and analyzing and interpreting their motion, has been a subject of increasing interest in many research communities, including the computer vision, computer graphics and medical imaging communities. Recent evolutions in acquisition technologies, including 3D depth cameras (Time-of-Flight and Kinect), multi-camera systems, marker-based motion capture systems, ultrasound and CT scans, have led those communities to consider capturing real scenes and their dynamics, creating 4D spatio-temporal models, and analyzing and interpreting them. A number of applications have therefore emerged, including dense motion capture, dynamic shape modeling and animation, temporally consistent 3D reconstruction, and motion analysis and interpretation.
Most existing shape analysis tools are local, in the sense that they give local insight about an object's geometry or purpose. The use of both geometry and motion cues makes it possible to recover more global information, in order to get extensive knowledge about a shape. For instance, motion can help to decompose a 3D model of a character into semantically significant parts, such as legs, arms, torso and head. Possible applications of such high-level shape understanding include accurate feature computation, comparison between models to detect defects or medical pathologies, and the design of new biometric models or new anthropometric datasets.
The recovery of dense motion information enables the combined analysis of shapes and their motions. Typical examples include the estimation of mean shapes given a set of 3D models, or the identification of abnormal deformations of a shape given its typical evolutions. The interest arises in several application domains where temporal surface deformations need to be captured and analyzed. This includes human body analysis, for which potential applications are numerous and important, from the identification of pathologies to the design of new prostheses.
The ability to build models of humans in real time makes it possible to develop interactive applications where users interact with virtual worlds. The recent Kinect proposed by Microsoft illustrates this principle with game applications using human inputs perceived with a depth camera. Other examples include gesture interfaces using visual inputs. A challenging issue in this domain is the ability to capture complex scenes in natural environments. Multi-modal visual perception, e.g. with depth and color cameras, is one objective in that respect.
The Grimage platform is an experimental multi-camera platform dedicated to spatio-temporal modeling, including immersive and interactive applications. It hosts a multiple-camera system connected to a PC cluster, as well as visualization facilities including head-mounted displays. This platform is shared by several research groups, most prominently MOAIS, MORPHEO and PERCEPTION. In particular, Grimage allows challenging real-time immersive applications based on computer vision and interactions between real and virtual objects (Figure ).
Vgate is an immersive environment that allows full-body immersion and interaction with virtual worlds. It is a joint initiative of computer scientists from computer vision, parallel computing and computer graphics from several research groups at INRIA Grenoble Rhône-Alpes, and in collaboration with the company 4D View Solutions. The MORPHEO team is leading this project.
This project is a follow-up of the experimental set-up developed for a CNES project with Mathieu Beraneck from the CESeM laboratory (centre for the study of sensorimotor control, CNRS UMR 8194) at Paris-Descartes University. The goal of this project was to analyze the 3D body postures of mice with various vestibular deficiencies in low-gravity conditions (3D posturography) during a parabolic flight campaign. The set-up has now been adapted for new experiments on motor-control disorders for other mouse models. This experimental platform is currently under development for broader deployment for high-throughput phenotyping within the technology transfer project ETHOMICE. This project involves a close relationship with the CESeM laboratory and the European Mouse Clinical Institute in Strasbourg (Institut Clinique de la Souris, ICS).
Lucy Viewer
http://
This website hosts dynamic mesh sequences reconstructed from images captured using a multi-camera setup. Such mesh sequences offer a promising new vision of virtual reality, by capturing real actors and their interactions. The texture information is trivially mapped to the reconstructed geometry by back-projecting from the images. These sequences can be seen from arbitrary viewing angles as the user navigates in 4D (3D geometry + time). Different sequences of human / non-human interaction can be browsed and downloaded from the data section. Software to visualize and navigate these sequences is also available for download.
This work is done in collaboration with Carlos Andújar, Pere Brunet and Álvar Vinacua from Universitat Politecnica de Barcelona, Spain, and has been published in the CAD journal . The purpose is to propose an efficient method to create 2-manifold meshes from real data, obtained as soups of polygons with combinatorial, geometrical and topological noise (see Figure ). We propose to use a voxel structure called a discrete membrane and morphological operators to compute possible topologies, between which the user chooses.
This work is part of the BQR project IDEAL (see Section ), performed in collaboration with Leila de Floriani from the University of Genova in Italy. The main goal of this project is to study non-manifold geometrical models and to find features that allow these models to be classified, as well as criteria for determining their shape. We are interested in non-manifold models, such as idealized industrial CAD models, since they are still ill-understood even though they are frequently used in computer graphics and many engineering applications.
We have developed an efficient method to compute the homology of a large (non-manifold) simplicial complex, from the homologies of its sub-complexes. Computed topological invariants play a crucial role in the field of shape description and analysis. This work has been published in the CAD journal and presented at the SIAM conference on geometric and physical modeling (GD/SPM'11).
In collaboration with Radu Horaud and Andrei Zaharescu, we developed a novel approach for the scale-space representation of scalar functions defined over Riemannian manifolds. One of the main interests of such representations stems from the task of 3D modelling, where 2D surfaces, endowed with various physical properties, are recovered from images. Multi-scale analysis structures the information with respect to its intrinsic scale, hence enabling a wide range of low-level computations, similar to those usually used for representing images. In contrast to the Euclidean image domain, where scale spaces can easily be obtained through convolutions with Gaussian kernels, surfaces require a more general approach that must handle non-Euclidean spaces. Such a generalized scale-space framework is the main contribution of this work, which builds on the spectral decomposition available with the heat-diffusion framework to derive a computational approach for representing scalar functions on 2D Riemannian manifolds using an intrinsic scale parameter. In addition, we proposed a feature detector and a region descriptor based on these representations, extending the widely used DOG detector and HOG descriptor to manifolds. Experiments on real datasets with various physical properties, i.e. scalar functions, demonstrated the validity and the interest of this approach .
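In discrete form, the heat-diffusion scale space amounts to attenuating the spectral coefficients of the function: f_t = Σ_i exp(-λ_i t) ⟨f, φ_i⟩ φ_i, where (λ_i, φ_i) are Laplacian eigenpairs. A small sketch, with a path-graph Laplacian standing in for the discretized manifold:

```python
import numpy as np

def heat_scale_space(L, f, t):
    """Diffuse scalar function f for time t via the Laplacian eigenbasis:
    f_t = sum_i exp(-lambda_i * t) <f, phi_i> phi_i."""
    lam, phi = np.linalg.eigh(L)
    coeffs = phi.T @ f                       # spectral coefficients of f
    return phi @ (np.exp(-lam * t) * coeffs)

# Path-graph Laplacian as a stand-in for a discretized 1D manifold.
n = 20
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1                      # zero row sums at the boundary
f = np.zeros(n); f[10] = 1.0                 # an impulse to diffuse
f_small = heat_scale_space(L, f, 0.5)        # fine scale
f_large = heat_scale_space(L, f, 10.0)       # coarse scale
```

Increasing t suppresses high-frequency components, exactly as larger Gaussian kernels do on images; on a curved surface the same recipe applies with a geometric Laplacian in place of L.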
3D shape matching is an important problem in computer vision. One of the major difficulties in finding dense correspondences between 3D shapes is related to the topological discrepancies that often arise due to complex kinematic motions. In this work, done in collaboration with Jan Cech, Radu Horaud and Avinash Sharma, a shape matching method is proposed that is robust to such changes in topology. The algorithm starts from a sparse set of seed matches and outputs a dense matching. We use a shape descriptor based on properties of the heat kernel, which provides an intrinsic scale-space representation. This descriptor incorporates (i) heat flow from already matched points and (ii) self-diffusion. At small scales the descriptor behaves locally and hence is robust to global changes in topology. Therefore, it can be used to build a vertex-to-vertex matching score conditioned on an initial correspondence set. This score is then used to iteratively add new correspondences, based on a novel seed-growing method that propagates the seed correspondences to nearby vertices. The matching is further densified via an EM-like method that explores the congruency between the two shape embeddings. The method is compared with two recently proposed algorithms, and we show that we can deal with substantial topological differences between the two shapes .
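A drastically simplified sketch of the seed-growing idea (the heat-kernel descriptor computation and the EM densification are omitted; the graphs and descriptors below are toy inputs, not the paper's):

```python
import numpy as np

def seed_grow_matching(adj1, adj2, desc1, desc2, seeds, tau=0.5):
    """Greedily propagate seed correspondences to graph neighbours whose
    descriptors agree -- a simplified sketch of seed-growing densification."""
    matches = dict(seeds)                       # vertex in shape1 -> shape2
    frontier = list(seeds)
    while frontier:
        i, j = frontier.pop()
        for ni in adj1[i]:
            if ni in matches:
                continue
            # best unmatched neighbour of j in shape2 by descriptor distance
            cands = [nj for nj in adj2[j] if nj not in matches.values()]
            if not cands:
                continue
            nj = min(cands, key=lambda c: np.linalg.norm(desc1[ni] - desc2[c]))
            if np.linalg.norm(desc1[ni] - desc2[nj]) < tau:
                matches[ni] = nj
                frontier.append((ni, nj))
    return matches

# Two identical 5-vertex chains with identical descriptors: one seed
# densifies into a full vertex-to-vertex matching.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
desc = {i: np.array([float(i)]) for i in range(5)}
m = seed_grow_matching(adj, adj, desc, desc, [(0, 0)])
```

The key property exploited in the actual method is that the descriptor is conditioned on already-matched points, so each accepted correspondence sharpens the score for its neighbours.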
Mesh animations, or sequences of meshes, represent a huge amount of data, especially when acquired from scans or videos. In collaboration with the University of Lyon (LIRIS lab), we address the problem of partitioning these sequences, in order both to recover motion information and to be able to compress them. Following last year's method, we proposed this year second and third motion-based segmentation algorithms, which cluster mesh vertices into static or rigidly moving components (see Figure ). These methods are based on spectral clustering of the vertex transformations and are more robust and general than the previous one. This work has been submitted for publication in a journal and is part of the PhD thesis of Romain Arcila .
Recovering dense motion information is a fundamental intermediate step in the image processing chain, upon which higher-level applications such as tracking or segmentation can be built. For that purpose, pixel observations in the image provide useful motion cues through temporal variations of the intensity function. We have studied the estimation of dense, instantaneous 3D motion fields over non-rigidly moving surfaces observed by multi-camera systems. The motivation arises from multi-camera applications that require motion information for arbitrary subjects, in order to perform tasks such as surface tracking or segmentation. To this aim, we have proposed a novel framework that efficiently computes dense 3D displacement fields using low-level visual cues and geometric constraints. The main contribution is a unified framework that combines flow constraints for small displacements with temporal feature constraints for large displacements, and fuses them over the surface using local rigidity constraints. The resulting linear optimization problem allows for variational solutions and fast implementations. Experiments conducted on synthetic and real data demonstrated the respective interests of flow and feature constraints, as well as their efficiency in providing robust surface motion cues when combined , .
As an extension of this work, we also studied the situation where a depth camera and one or more color cameras are available, a common situation with recent composite sensors such as the Kinect. In this case, geometric information from depth maps can be combined with intensity variations in color images in order to estimate smooth and dense 3D motion fields. We propose a unified framework for this purpose that can handle both arbitrarily large motions and sub-pixel displacements. The novelty with respect to existing scene flow approaches is that it takes advantage of the geometric information provided by the depth camera to define a surface domain over which photometric constraints can be consistently integrated in 3D. Experiments on real and synthetic data provide both qualitative and quantitative results that demonstrate the interest of the approach .
We present a novel probabilistic framework for rigid tracking and segmentation of shapes observed from multiple cameras. Most existing methods have focused on solving each of these problems individually, segmenting the shape assuming surface registration is solved, or conversely performing surface registration assuming shape segmentation or kinematic structure is known. We assume no prior kinematic or registration knowledge except for an over-estimate k of the number of rigidities in the scene, instead proposing to simultaneously discover, adapt, and track the scene's rigid structure on the fly. We simultaneously segment and infer poses of rigid subcomponents of a single chosen reference mesh acquired in the sequence. We show that this problem can be rigorously cast as a likelihood maximization over rigid component parameters. We solve this problem using an Expectation-Maximization algorithm, with latent observation assignments to reference vertices and rigid parts. Our experiments on synthetic and real data show the validity of the method, robustness to noise, and its promising applicability to complex sequences. This work was presented at the CVPR 2011 conference .
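The M-step of such an EM has a classical closed form: given soft assignments of observed points to a rigid part, the part's pose is the weighted Procrustes/Kabsch fit. A self-contained sketch of that building block (generic, not the paper's exact formulation):

```python
import numpy as np

def weighted_rigid_fit(P, Q, w):
    """Closed-form M-step: rotation R and translation t minimizing
    sum_i w_i ||R P_i + t - Q_i||^2 (weighted Procrustes / Kabsch)."""
    w = w / w.sum()
    p_bar, q_bar = w @ P, w @ Q                 # weighted centroids
    H = (P - p_bar).T @ ((Q - q_bar) * w[:, None])
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_bar - R @ p_bar
    return R, t

# Recover a known rigid motion from noiseless correspondences.
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true
R, t = weighted_rigid_fit(P, Q, np.ones(50))
```

In the EM above, the weights would be the E-step responsibilities of the part for each observed point, and one such fit is performed per rigid component at each iteration.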
Patient-specific 3D virtual models of anatomical organs are becoming more and more useful in medicine, for instance for diagnosis or follow-up care purposes. These models are usually created from 2D scan or MRI images. However, small or thin geometrical features, such as ligaments, are sometimes not visible in these images. We propose to use an anatomical ontology, called MyCorporisFabrica
http://
Aneurysms are excrescences on blood vessels. They can rupture, letting blood propagate outside the vessel, which often leads to death. In some cases, the blood clots fast enough that the patient survives. However, a neurosurgeon or a neuroradiologist must then intervene very quickly in order to repair the vessel before the aneurysm ruptures again.
The purpose of our research is to help neurosurgeons and neuroradiologists plan surgery, by giving them quantitative information about the size, shape and position of aneurysms. This work was part of the PhD of Sahar Hassan and was presented at the International Conference on Computer Analysis of Images and Patterns (CAIP) . The method we propose first extracts a centered skeleton from the input voxel set of the vascular tree, then detects aneurysms by studying variations of vessel diameters along the skeleton. The name of the aneurysm-carrying vessel is also provided, thanks to a partial graph-matching technique, and accurate measures are provided to decide on the treatment.
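The diameter-variation criterion can be caricatured in a few lines: flag skeleton samples whose diameter greatly exceeds the local median (the window and threshold below are illustrative, not those used in the actual method):

```python
import numpy as np

def detect_aneurysms(diameters, window=5, factor=1.5):
    """Flag skeleton positions where the vessel diameter exceeds
    'factor' times the median diameter of a sliding neighbourhood --
    a crude sketch of detecting bulges along a vessel centerline."""
    hits = []
    for i in range(len(diameters)):
        lo, hi = max(0, i - window), min(len(diameters), i + window + 1)
        neighbourhood = np.concatenate([diameters[lo:i], diameters[i+1:hi]])
        if diameters[i] > factor * np.median(neighbourhood):
            hits.append(i)
    return hits

# A vessel of roughly constant 2 mm diameter with a bulge near index 30.
d = np.full(60, 2.0)
d[29:32] = [3.5, 4.0, 3.5]
hits = detect_aneurysms(d)
```

The real method additionally works on a branching skeleton graph, so candidate bulges must be attributed to a named vessel, which is where the partial graph matching comes in.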
This work investigates and proposes a mathematical framework to perform statistical analysis and dimensionality reduction on rotational trajectories derived from motion capture data. Motion capture data consist of a set of trajectories in the space of 3D rotations (SO(3)) and as such do not present the properties of a Euclidean space. Consequently, there is no easy way to apply standard dimensionality reduction techniques to these data. Using the formalism of exponential maps and Principal Geodesic Analysis (PGA), it has been shown that a dimensionality reduction analysis can be rigorously derived on such data. This reduction can typically be applied to the compression of motion capture data and to a probabilistic implementation of the Inverse Kinematics problem. This approach has shown good properties in the context of physically-based animation, with a Lagrangian formulation of rigid-body dynamics coupled with geometric integrators. These integrators achieve good preservation of momentum using only first-order equations, reaching both real time and a high level of realism. This work was developed through the PhD thesis of Maxime Tournier . Early development of PGA on motion capture data was published at Eurographics in 2009. Its integration into a GPLVM framework has been published this year in the IEEE CG&A journal . Its extension to the context of physically-based animation is currently being prepared for publication.
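The building blocks of such an analysis are the exponential and logarithm maps of SO(3), which move between rotation matrices and their axis-angle tangent vectors where linear statistics can be applied (a standard Rodrigues implementation, shown only to make the formalism concrete):

```python
import numpy as np

def exp_so3(w):
    """Rodrigues' formula: axis-angle vector w -> rotation matrix."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K

def log_so3(R):
    """Inverse map: rotation matrix -> axis-angle vector (angle < pi)."""
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta * v / (2 * np.sin(theta))

# Round trip: log is a left inverse of exp for angles below pi.
w = np.array([0.1, -0.4, 0.2])
R = exp_so3(w)
w_back = log_so3(R)
```

PGA then performs PCA-like analysis on such log-mapped tangent vectors taken at the intrinsic mean rotation, which is what makes dimensionality reduction rigorous on SO(3)-valued trajectories.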
Following a study on the locomotion of quadrupeds by a team at the National Museum of Natural History (MNHN), a new theory of motion planning has been proposed. This theory, the Antero-Posterior Sequences (APS), characterizes the sequence of foot placements of quadrupeds for all regular gaits with very few parameters, as well as transitions between gaits, from standstill to full gallop. In collaboration with the MNHN and the robotics department of the University of Versailles-Saint-Quentin-en-Yvelines (UVSQ), a rigorous software implementation has been specified and developed. This software automatically generates the foot placements of quadruped locomotion according to a desired speed transition. Co-workers on this project were Ludovic Maes and Anick Abourachid at the MNHN and Vincent Hugel at the UVSQ. A patent has been written and finalized for this project.
In parallel, a collaboration on the physical simulation of quadruped locomotion has been carried out with Stelian Coros (previously at the University of British Columbia (UBC), now at Disney Research) and Michiel van de Panne (UBC). Automatic video analysis of dog walking, trotting and running has been used to optimize the parameters of physical controllers. This work has been published at SIGGRAPH 2011 .
A three-year collaboration with Technicolor started in 2011. The objective of this collaboration is to develop new gesture interfaces. Such interfaces should go beyond the Microsoft Kinect's capabilities and be able to capture and interpret complex dynamic scenes in uncontrolled environments. A co-supervised PhD has started on this topic.
3D models, coming for instance from engineering fields, are often “idealized”, or “simplified” (topologically speaking), in order to be used for simulation. The goal of the IDEAL project, funded by Grenoble INP, is to study these models, in particular the most general ones, which are called “non-manifold” and which are not handled by current software. We collaborate on this project with the University of Genova in Italy (Leila De Floriani).
MORPHO aims at designing new technologies for the measurement and analysis of dynamic surface evolutions using visual data. Optical systems and digital cameras provide a simple and non-invasive means to observe shapes that evolve and deform, and we propose to study the associated computing tools that allow for the combined analysis of shapes and motions. Typical examples include the estimation of mean shapes given a set of 3D models, or the identification of abnormal deformations of a shape given its typical evolutions. This therefore includes not only static shape models but also the way they deform with respect to typical motions. It opens a new research area on how motions relate to shapes, where the relationships can be represented through various models that include traditional underlying structures, such as parametric shape models, but are not limited to them. The interest arises in several application domains where temporal surface deformations need to be captured and analyzed. This includes human body analysis, but also extends to other deforming objects, sails for instance. Potential applications with human bodies are numerous and important, from the identification of pathologies to the design of new prostheses. The project focus is therefore on human body shapes and their motions, and on how to characterize them through new biometric models for analysis purposes. Three academic partners collaborate on this project: INRIA Rhône-Alpes with the Perception and Evasion teams, the GIPSA-lab Grenoble and INRIA-Lorraine with the Alice team.
Website:
http://
This three-and-a-half-year project, funded by the ANR, started on January 1st, 2008. Its goal is threefold:
create a repository of 3D and 3D+t mesh models, together with ground truth segmentations (either done manually or automatically)
use human perception to enhance conception and evaluation of segmentation algorithms
develop new segmentation techniques for 3D and 3D+t meshes, using human perception and results of subjective experiments
On this project, Morpheo focuses on sequences of meshes evolving through time. Other partners are LIFL in Lille and LIRIS in Lyon.
Quaero is a program promoting research and industrial innovation in technologies for the automatic analysis and classification of multimedia and multilingual documents. The partners collaborate on research and on the realisation of advanced demonstrators and prototypes of innovative applications and services for accessing and using multimedia information, such as spoken language, images, video and music. The consortium is composed of French and German public and private research organisations and is coordinated by Technicolor. The Morpheo team participates in the project for the development of visual gesture interfaces, with the objective of easing access to multimedia information.
The ADT (Action de Développement Technologique) Vgate was proposed in the context of the Grimage interactive and immersive platform. The objective of Vgate is to manage the evolution of the Grimage platform, on both the hardware and software sides, to ensure the improvement, reusability and durability of its perception and immersion capabilities. Vgate was proposed in collaboration with the EPI Moais from INRIA Grenoble Rhône-Alpes.
This project is carried out in collaboration with the Virtual Plants and Galaad teams. Its objective is to develop the use of laser scanners for plant geometry reconstruction, in partnership with biologists-agronomists from several teams in France and Europe.
Program: FP7 ICT STREP
Project acronym: RE@CT
Project title: IMMERSIVE PRODUCTION AND DELIVERY OF INTERACTIVE 3D CONTENT
Duration: 12/2011 - 12/2013
Coordinator: BBC (UK)
Other partners: Fraunhofer HHI (Germany), University of Surrey (UK), Artefacto (France), OMG (UK).
Abstract: RE@CT will introduce a new production methodology to create film-quality interactive characters from 3D video capture of actor performance. Recent advances in graphics hardware have produced interactive video games with photo-realistic scenes. However, interactive characters still lack the visual appeal and subtle details of real actor performance as captured on film. In addition, existing production pipelines for authoring animated characters are highly labour intensive. RE@CT aims to revolutionise the production of realistic characters and significantly reduce costs by developing an automated process to extract and represent animated characters from actor performance capture in a multiple camera studio. The key innovation is the development of methods for analysis and representation of 3D video to allow reuse for real-time interactive animation. This will enable efficient authoring of interactive characters with video quality appearance and motion. The project builds on the latest advances in 3D and free-viewpoint video from the contributing project partners. For interactive applications, the technical challenges are to achieve another step change in visual quality and to transform captured 3D video data into a representation that can be used to synthesise new actions and is compatible with current gaming technology.
Program: MEDEA
Project acronym: iGlance
Duration: 09/2008 - 09/2011
Coordinator: ST Micro Electronics
Other partners: Philips Research (Holland), the University of Eindhoven (Holland), 4D View Solutions (France), Silicon Hive (Holland), Logica (France), Task 24 (Holland), Verum (Holland), Tima (France).
Abstract: The primary goal of the iGlance project is to define a complete end-to-end 3D image chain for both consumer 3DTV applications and healthcare applications. The project includes the study and the realization of a 3DTV receiver that will be compliant with the consumer market requirements in terms of cost, time to market, and interoperability.
The secondary goal of the project is to take advantage of the received 3D data to make interactive free viewpoint selection possible in broadcast 3D TV media. This means that the viewer can select and interactively change the viewpoint of a stereoscopic streamed video. The interactivity is enabled by broadcasting a number of video streams from several viewpoints, each consisting of a traditional 2D video plus per-frame depth information. Any desired view location in-between is generated by free viewpoint interpolation, using the depth information. The interpolated images are then displayed on a stereoscopic screen, giving a 3D impression to the audience.
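The depth-based view interpolation described above can be illustrated with a minimal forward-warping sketch. This is not the iGlance pipeline itself but a simplified assumption: disparity is modelled as inversely proportional to depth, and `synthesize_view`, `alpha` and `max_disparity` are hypothetical names introduced here for illustration.

```python
import numpy as np

def synthesize_view(image, depth, alpha, max_disparity=16.0):
    """Forward-warp a 2D image to a virtual in-between viewpoint.

    `alpha` in [0, 1] is the position between the original camera (0)
    and a hypothetical neighbouring camera (1). Disparity is assumed
    inversely proportional to depth (a simplified linear model).
    """
    w = depth.shape[1]
    out = np.zeros_like(image)
    # Closer pixels (small depth) shift more between viewpoints.
    disparity = (alpha * max_disparity / np.maximum(depth, 1e-6)).astype(int)
    # Process pixels farthest-first so that nearer pixels overwrite
    # farther ones at the same target location (a z-buffer effect).
    order = np.argsort(-depth, axis=None)
    ys, xs = np.unravel_index(order, depth.shape)
    xs_new = np.clip(xs + disparity[ys, xs], 0, w - 1)
    out[ys, xs_new] = image[ys, xs]
    return out
```

A real system would additionally fill the disocclusion holes that forward warping leaves behind, typically by blending warps from two neighbouring broadcast viewpoints.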
In the context of the IDEAL project (jointly with the IMAGINE Inria team), we investigate with Leila de Floriani the topological decomposition of simplicial shapes, in order to classify non-manifold singularities.
As part of the PlantScan3D project, we work with Eric Casella on processing laser scans of trees and plants. More specifically, we try to recover global and local information about a plant from a single point cloud, without normal or color information.
Following the previous EVASION project, Franck Hétroy and Lionel Reveret were involved in the Associate Team SHARE. This project targets the 3D modeling and animation of complex shapes such as animals in motion. It gathered the INRIA project EVASION and the Computer Graphics Department of UBC (University of British Columbia, Canada). Work was done on the visual perception of 3D animation and its application to the compression of 3D animation (Franck Hétroy and Ron Rensink). Another collaboration addressed video motion capture and the physical simulation of quadrupeds (Stelian Coros, Michiel van de Panne, Lionel Reveret); this last work was published at SIGGRAPH 2011.
This is a “Partenariat Hubert Curien” (PHC) between the Technical University of Munich, Germany, and MORPHEO (2010-11). The scientific objectives of this collaboration concern the temporal aspects of the 3D reconstruction of dynamic scenes, and human action recognition in multiple-camera systems.
Edmond Boyer:
is a member of the editorial board of the Image and Vision Computing journal.
was co-organizer of the ICCV 2011 workshop 4DMOD on 4D Modeling.
was an area chair for ICCV 2011.
was a member of the program committees of: CVPR 2011, CVMP 2011, PERHAPS 2011, SGA 2011, RFIA 2011 and ORASIS 2011.
has been reviewing for the journals: IEEE PAMI, IJCV, IEEE TVCG.
was a reviewer for one habilitation and three PhD theses, and an examiner for one PhD thesis.
was a member of the recruiting committees of the Université Joseph Fourier and of ENSIMAG.
Edmond Boyer gave invited talks at Imagina2011 and at TU Munich.
Jean-Sébastien Franco:
has reviewed for the following conferences: CVPR 2011, ICCV 2011, VR 2011, and the 4DMOD ICCV 2011 workshop on 4D modeling.
has reviewed for the Agence Nationale de la Recherche CONTINT call.
was a reviewer for the PhD thesis of Jordi Salvador, UPC Barcelona.
has given an invited talk at the Dagstuhl Seminar 11261, Outdoor and Large Scale Scene Analysis.
Franck Hétroy:
was a co-organizer of the Journées 2011 du Groupe de Travail en Modélisation Géométrique (GTMG).
has reviewed for the journals IEEE TVCG and Computers & Graphics, and for the conferences Pacific Graphics 2011 and Eurographics 2012.
has also reviewed a project for the Chilean National Commission for Scientific and Technological Research (CONICYT).
Lionel Reveret:
was a program committee member of EUROGRAPHICS 2012.
has reviewed for the conferences: EUROGRAPHICS 2011 and SIGGRAPH Asia 2011.
has been reviewing for the journals: IEEE TVCG, Computer Graphics Forum.
Edmond Boyer:
Master: 3D Modeling, 12h, M2R GVR, Université Joseph Fourier Grenoble.
Master: Image Synthesis, 15h, M1 Informatique, Université Joseph Fourier Grenoble.
Master: Programming Project, 15h, M1 Informatique - M1 MoSig, Université Joseph Fourier Grenoble.
Master: Introduction to Image Analysis, 15h, M1 MoSig, Université Joseph Fourier Grenoble.
Jean-Sébastien Franco:
Licence: Algorithmics, 56h, Ensimag 1st year, Grenoble INP.
Licence: Introduction to Networks, 27h, Ensimag 1st year, Grenoble INP.
Licence: C Project, 51.5h, Ensimag 1st year, Grenoble INP.
Master: Study and Research Projects, 3h, Ensimag 2nd year, Grenoble INP.
Master: 3D Graphics, 19.5h, Ensimag 2nd year, Grenoble INP.
Master: C++ Modeling and Programming, 27h, Ensimag 2nd year, Grenoble INP.
Franck Hétroy:
Master: Algorithmics and Object-Oriented Programming, 36h, Ensimag 2nd year, Grenoble INP.
Master: C++ Modeling and Programming, 27h, Ensimag 2nd year, Grenoble INP.
Master: Image Specialization Projects, 28h, Ensimag 2nd year, Grenoble INP.
Master: Study and Research Projects, 3h, Ensimag 2nd year, Grenoble INP.
Master: Computational Geometry, 9h, Ensimag 3rd year, Grenoble INP.
Licence: Algorithmics and Data Structures, 36h, Ensimag 1st year, Grenoble INP.
Licence: Basic Software, 18h, Ensimag 1st year, Grenoble INP.
Lionel Reveret:
Master: Study and Research Projects, 3h, Ensimag 2nd year, Grenoble INP.
Master: Image Projects, 50h, Ensimag 2nd year, Grenoble INP.
Master: Multidimensional Statistical Analysis (lab sessions), 18h, Ensimag 2nd year, Grenoble INP.
Master: 3D Animation and Maya development, 18h, Ensimag 3rd year, Grenoble INP.
Master: Computer Graphics - Animation, 12h, M2R MOSIG, Grenoble INP.
PhD: Romain Arcila, Séquences de maillages : classification et méthodes de segmentation, Université Claude Bernard - Lyon 1, 25/11/2011, supervised by Florent Dupont and Franck Hétroy.
PhD: Sahar Hassan, Intégration de connaissances anatomiques a priori dans des modèles géométriques, Université Joseph Fourier - Grenoble 1, 20/06/2011, supervised by Georges-Pierre Bonneau and Franck Hétroy.
PhD: Benjamin Petit, Téléprésence, immersion et interactions pour la reconstruction 3D temps-réel, Université Joseph Fourier, 21/02/2011, supervised by Edmond Boyer and Bruno Raffin.
PhD: Robin Skowronski, Perception de l'environnement, application aux drones légers, Université Bordeaux 1, 03/11/2011, supervised by Jean-Sébastien Franco and Pascal Guitton [confidential publication].
PhD: Maxime Tournier, Dimension Reduction for Character Animation, Université Grenoble, 17/10/2011, supervised by Lionel Reveret.
PhD in progress: Benjamin Aupetit, Géométrie différentielle discrète et analyse spectrale de maillages spatio-temporels, 01/10/2011, supervised by Edmond Boyer and Franck Hétroy.
PhD in progress: Phuong Ho, Traitement numérique de la géométrie pour la reconstruction cohérente de séquences temporelles à partir de données stéréo multi-vues, 01/02/2011, supervised by Bruno Lévy and Franck Hétroy.
PhD in progress: Yann Savoye, Reconstruction Dynamique de la Forme et du Mouvement Humain, Université Bordeaux 1, supervised by Jean-Sébastien Franco and Pascal Guitton.
PhD in progress: Abdelaziz Djellouah, Gesture Interfaces, Technicolor-Université Grenoble, supervised by Jean-Sébastien Franco and Edmond Boyer.
PhD in progress: Estelle Duveau, Motion Measurement of Small Vertebrates, Université Grenoble, supervised by Lionel Reveret and Edmond Boyer.
PhD in progress: Simon Courtemanche, Caractérisation des Mouvements en Escalade Sportive par Mesure Vidéo, Université Grenoble, supervised by Lionel Reveret and Edmond Boyer.