Maverick is both an Inria project-team and a team of the LJK (UMR 5224), a joint research lab of CNRS, Université Joseph Fourier Grenoble I (UJF), Université Pierre Mendès France Grenoble II (UPMF) and Institut National Polytechnique de Grenoble (INPG).
Computer-generated pictures and videos are now ubiquitous, whether for leisure activities, such as special effects in motion pictures and video games, or for more serious activities, such as visualization and simulation.
Maverick was created as a research team in January 2012 and upgraded to a full project-team in January 2014. We deal with image synthesis methods. We place ourselves at the end of the image production pipeline, when the pictures are generated and displayed (see figure ). We take many possible inputs: datasets, video streams, pictures and photographs, (animated) geometry from a virtual world... We produce as output pictures and videos.
These pictures will be viewed by humans, and we consider this an important point of our research strategy, as it provides the benchmarks for evaluating our results: the pictures and animations produced must be able to convey their message to the viewer. The actual message depends on the specific application: data visualization, exploring virtual worlds, designing paintings and drawings... Our vision is that all these applications share common research problems: ensuring that the important features are perceived, avoiding clutter and aliasing, efficient internal data representation, etc.
Computer Graphics in general, and Maverick in particular, is at the crossroads between fundamental research and industrial applications. We look both at the constraints and needs of applicative users and at long-term research issues such as sampling and filtering.
The project-team aims at producing representations and algorithms for efficient, high-quality computer generation of pictures and animations through the study of four research problems:
Computer Visualization, where we take as input a large localized dataset and represent it in a way that will let an observer understand its key properties,
Expressive Rendering, where we create an artistic representation of a virtual world,
Illumination Simulation, where our focus is modelling the interaction of light with the objects in the scene,
Complex Scenes, where our focus is rendering and modelling highly complex scenes.
The heart of Maverick is understanding what makes a picture useful, powerful and interesting for the user, and designing algorithms to create these pictures.
We will address these research problems through three interconnected approaches:
working on the impact of pictures, by conducting perceptual studies, measuring and removing artefacts and discontinuities, evaluating the user response to pictures and algorithms,
developing representations for data, through abstraction, stylization and simplification,
developing new methods for predicting the properties of a picture (e.g. frequency content, variations) and adapting our image-generation algorithms to these properties.
A fundamental element of the project-team is that the research problems and the scientific approaches are all cross-connected. Research on the impact of pictures is of interest in three different research problems: Computer Visualization, Expressive Rendering and Illumination Simulation. Similarly, our research on Illumination Simulation will gather contributions from all three scientific approaches: impact, representations and prediction.
Our paper on “Diffusion Curves” , originally published in 2008, was featured in the “Research Highlights” section of Communications of the ACM .
Our work on using the covariance matrix for illumination simulation, in cooperation with F. Durand at MIT, has been published in ACM Transactions on Graphics .
Our work on efficient sampling and filtering for displacement maps and texture maps has been published at Siggraph Asia and in ACM Transactions on Graphics . This work was done in cooperation with the University of Lyon and the University of Montreal. Initial response from the community has been enthusiastic.
The Maverick project-team aims at producing representations and algorithms for efficient, high-quality computer generation of pictures and animations through the study of four research problems:
Computer Visualization, where we take as input a large localized dataset and represent it in a way that will let an observer understand its key properties. Visualization can be used for data analysis, for the results of a simulation, for medical imaging data...
Expressive Rendering, where we create an artistic representation of a virtual world. Expressive rendering corresponds to the generation of drawings or paintings of a virtual scene, but also to some areas of computational photography, where the picture is simplified in specific areas to focus the attention.
Illumination Simulation, where we model the interaction of light with the objects in the scene, resulting in a photorealistic picture of the scene. Research includes improving the quality and photorealism of pictures, including more complex effects such as depth-of-field or motion blur. We are also working on accelerating the computations, both for real-time photorealistic rendering and for offline, high-quality rendering.
Complex Scenes, where we generate, manage, animate and render highly complex scenes, such as natural scenes with forests, rivers and oceans, but also large datasets for visualization. We are especially interested in interactive visualization of complex scenes, with all the associated challenges in terms of processing and memory bandwidth.
The fundamental research interest of Maverick is, first, understanding what makes a picture useful, powerful and interesting for the user, and, second, designing algorithms to create and improve these pictures.
We will address these research problems through three interconnected research approaches:
Our first research axis deals with the impact pictures have on the viewer, and how we can improve this impact. Our research here will target:
evaluating user response: we need to evaluate how the viewers respond to the pictures and animations generated by our algorithms, through user studies, either asking the viewer about what he perceives in a picture or measuring how his body reacts (eye tracking, position tracking).
removing artefacts and discontinuities: temporal and spatial discontinuities perturb viewer attention, distracting the viewer from the main message. These discontinuities occur during the picture creation process; finding and removing them is a difficult process.
The data we receive as input for picture generation is often unsuitable for interactive high-quality rendering: too many details, no spatial organisation... Similarly the pictures we produce or get as input for other algorithms can contain superfluous details.
One of our goals is to develop new data representations, adapted to our requirements for rendering. This includes fast access to the relevant information, but also access to the specific hierarchical level of information needed: we want to organize the data in hierarchical levels, and pre-filter it so that sampling at a given level also gives information about the underlying levels. Our research for this axis includes filtering, data abstraction, simplification and stylization.
The input data can be of any kind: geometric data, such as the model of an object, scientific data before visualization, pictures and photographs. It can be time-dependent or not; time-dependent data bring an additional challenge, as algorithms must support fast updates.
Our algorithms for generating pictures require computations: sampling, integration, simulation... These computations can be optimized if we already know the characteristics of the final picture. Our recent research has shown that it is possible to predict the local characteristics of a picture by studying the phenomena involved: the local complexity, the spatial variations, their direction...
Our goal is to develop new techniques for predicting the properties of a picture, and to adapt our image-generation algorithms to these properties, for example by sampling less in areas of low variation.
Our research problems and approaches are all cross-connected. Research on the impact of pictures is of interest in three different research problems: Computer Visualization, Expressive Rendering and Illumination Simulation. Similarly, our research on Illumination Simulation will use all three research approaches: impact, representations and prediction.
Beyond the connections between our problems and research approaches, we are interested in several issues, which are present throughout all our research:
Sampling is a ubiquitous process occurring in all our application domains, whether photorealistic rendering (e.g. photon mapping), expressive rendering (e.g. brush strokes), texturing, fluid simulation (Lagrangian methods), etc. When sampling and reconstructing a signal for picture generation, we have to ensure both coherence and homogeneity. By coherence, we mean not introducing spatial or temporal discontinuities in the reconstructed signal. By homogeneity, we mean that samples should be placed regularly in space and time. For a time-dependent signal, these requirements conflict with each other, opening new areas of research (a minimal sampler illustrating the homogeneity requirement is sketched after this list).
Filtering is another ubiquitous process, occurring in all our application domains, whether in realistic rendering (e.g. for integrating height fields, normals, material properties), expressive rendering (e.g. for simplifying strokes), or textures (through non-linearity and discontinuities). It is especially relevant when we replace a signal or data with a lower-resolution version (for hierarchical representation); this involves filtering the data with a reconstruction kernel representing the transition between levels.
Performance and scalability are also a common requirement for all our applications. We want our algorithms to be usable, which implies that they can be used on large and complex scenes, placing great importance on scalability. For some applications, we target interactive and real-time applications, with an update frequency between 10 Hz and 120 Hz.
Coherence in space and time is also a common requirement of realistic as well as expressive models, which must be ensured despite contradictory requirements. We want to avoid flickering and aliasing.
Animation: our input data is likely to be time-varying (e.g. animated geometry, physical simulation, time-dependent datasets). A common requirement for all our algorithms and data representations is that they must be compatible with animated data (fast updates for data structures, low-latency algorithms...).
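As an illustration of the homogeneity requirement mentioned above, here is a minimal dart-throwing Poisson-disk sampler in Python. This is a generic textbook technique given as a sketch, not one of the team's specific algorithms; all names are hypothetical.

```python
import numpy as np

def poisson_disk(n_target, r, max_tries=10000, seed=0):
    """Naive dart-throwing Poisson-disk sampling in the unit square.

    A candidate is rejected if it lies closer than r to an accepted
    sample, so the result is homogeneous: evenly spread, never clumped.
    """
    rng = np.random.default_rng(seed)
    samples = []
    tries = 0
    while len(samples) < n_target and tries < max_tries:
        p = rng.random(2)
        if all(np.linalg.norm(p - q) >= r for q in samples):
            samples.append(p)
        tries += 1
    return np.array(samples)

pts = poisson_disk(n_target=100, r=0.05)
print(f"accepted {len(pts)} homogeneous samples")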
Our research is guided by several methodological principles:
To find solutions and phenomenological models, we use experimentation, performing statistical measurements of how a system behaves. We then extract a model from the experimental data.
For each algorithm we develop, we look for experimental validation: measuring the behavior of the algorithm, how it scales, how it improves over the state of the art... We also compare our algorithms to the exact solution. Validation is harder for some of our research domains, but it remains a key principle for us.
The equations describing certain behaviors in image synthesis can have a large degree of complexity, precluding direct computation, especially in real time. This is true for the physical simulation of fluids, tree growth, illumination simulation... We look for emerging phenomena and phenomenological models to describe them (see framed box “Emerging phenomena”). Using these, we simplify the theoretical models in a controlled way, to improve user interaction and accelerate the computations.
Computer Graphics is, by nature, at the interface of many research domains: physics for the behavior of light, applied mathematics for numerical simulation, biology, algorithmics... We import tools from all these domains, and keep looking for new tools and ideas.
In situations where specific tools are required for a problem, we will proceed from a theoretical framework to develop them. These tools may in return have applications in other domains, and we are ready to disseminate them.
We have long experience of collaboration with industrial partners. These collaborations bring us new problems to solve, with short-term or medium-term transfer opportunities. When we cooperate with these partners, we have to identify what they need, which can be very different from what they want (their expressed need).
Maverick is part of the research theme “Interaction and Visualization” at Inria. This research theme has historically been very successful inside Inria. It nicely connects industrial applications with fundamental research using advanced mathematics, algorithmic and computer science, and it connects computer science with other sciences such as physics, biology, medicine, environment, psychophysiology.
We envision Maverick at this crossroads. We have several industrial partnerships, with companies making video games (Eden Games), special effects for motion pictures (WetaFX), planetariums (RSA Cosmos), graphics editing software (Adobe), tomography (Digisens) or visualization of simulated data (EDF). The constraints and needs of our partners motivate new problems for us to solve. At the same time, we are looking into fundamental research problems, such as the analysis of light transport, human perception, filtering and sampling.
The fundamental research problems we target are not necessarily “long term research”: the computer graphics industry is very dynamic and can adopt (and adapt) a research paper in a matter of months if it sees benefits in it. The research problems we describe as “fundamental” correspond to high-risk, high-benefit research problems. Solving these problems would result in a significant breakthrough for the whole domain of Computer Graphics, both in research and in industry.
Although it has long been recognized that the visual channel is one of the most effective means for communicating information, the use of computer processing to generate effective visual content has been mostly limited to very specific image types: realistic rendering, computer-aided cell animation, etc.
The ever-increasing complexity of available 3D models is creating a demand for improved image creation techniques for general illustration purposes. Recent examples in the literature include computer systems to generate road maps or assembly instructions, where a simplified visual representation is a necessity.
Our work in expressive rendering and in relevance-guided rendering aims at providing effective tools for all illustration needs that work from complex 3D models. We also plan to apply our knowledge of lighting simulation, together with expressive rendering techniques, to the difficult problem of sketching illustrations for architectural applications.
Video games represent a particularly challenging application domain, since they require both real-time interaction and high levels of visual quality. Moreover, video games are developed on a variety of platforms with completely different capacities. Automatic generation of appropriate data structures and runtime selection of optimal rendering algorithms can save companies a huge amount of development time.
More generally, interactive visualization of complex data (e.g. in scientific engineering) can be achieved only by combining various rendering accelerations (e.g. visibility culling, levels of detail, etc.), an optimization task that is hard to perform “by hand” and highly data-dependent. One of Maverick's goals is to understand this dependence and automate the optimization.
Virtual heritage is a recent area which has seen spectacular growth over the past few years. Archeology and heritage exhibits are natural application areas for virtual environments and computer graphics, since they provide the ability to navigate 3D models of environments that no longer exist and cannot be recorded on video. Moreover, digital models and 3D renderings give the ability to enrich the navigation with annotations.
Our work on style has proved very interesting to architects, who have a long habit of using hand-drawn schemas and wooden models to work and communicate. Wooden models can advantageously be replaced by 3D models inside a computer. Drawing, on the other hand, offers a higher level of interpretation and a richness of expression that architects really need, for example to emphasize that a given model is a hypothesis.
By investigating style analysis and expressive rendering, we could “sample” drawing styles used by architects and “apply” them to the rendering of 3D models. The computational power made available by computer assisted drawing can also lead to the development of new styles with a desired expressiveness, which would be harder to produce by hand. In particular, this approach offers the ability to navigate a 3D model while offering an expressive rendering style, raising fundamental questions on how to “animate” a style.
Maverick insists on sharing the software that it develops for internal use. These are all listed in a dedicated section of the web site http://
Gratin is a node-based compositing software for creating, manipulating and animating 2D and 3D data. It uses an internal directed acyclic multi-graph and provides an intuitive user interface that allows users to quickly design complex prototypes. Gratin has several properties that make it useful for researchers and students: (1) it works in real time: everything is executed on the GPU, using OpenGL, GLSL and/or Cuda; (2) it is easily programmable: users can directly write GLSL scripts inside the interface, or create new C++ plugins that will be loaded as new nodes in the software; (3) all the parameters can be animated using keyframe curves to generate videos and demos; (4) the system makes it easy to exchange nodes, groups of nodes or full pipelines between people. In a research context, Gratin aims at facilitating the creation of prototypes, testing ideas and exchanging data. For students, Gratin can be used to show real-time demos and videos, or to help learn GPU programming. Gratin has already been used for creating new computer graphics tools but also for designing perceptual experiments. Most of the work published by R. Vergne was done with Gratin.
PlantRad is a software program for computing solutions to the equation of light equilibrium in a complex scene including vegetation. The technology used is hierarchical radiosity with clustering and instantiation. Thanks to the latter, PlantRad is capable of treating scenes with a very high geometric complexity (up to millions of polygons) such as plants or any kind of vegetation scene where a high degree of approximate self-similarity permits a significant gain in memory requirements. Its main domains of applications are urban simulation, remote sensing simulation (See the collaboration with Noveltis, Toulouse) and plant growth simulation, as previously demonstrated during our collaboration with the LIAMA, Beijing.
In the context of the European project RealReflect, the Maverick team has developed the HQR software, based on the photon mapping method, which is capable of solving the light balance equation and of giving a high-quality solution. Through a graphical user interface, it reads X3D scenes using the X3DToolKit package developed at Maverick; it allows the user to tune several parameters, computes photon maps, and reconstructs information to obtain a high-quality solution. HQR also accepts plugins, which considerably eases the development of new algorithms for global illumination, since these benefit from the existing algorithms for handling materials, geometry and light sources. HQR is freely available for download at http://
The MobiNet software allows for the creation of simple applications such as video games, virtual physics experiments or pedagogical math illustrations. It relies on an intuitive graphical interface and language which allows the user to program a set of mobile objects (possibly through a network). It is available in the public domain at http://
The main aim of MobiNet is to allow young students at high-school level, with no programming skills, to experiment with the notions they learn in math and physics, by modeling and simulating simple practical problems, and even simple video games. This platform has been massively used during the Grenoble INP “engineer weeks” since 2002: 150 senior high-school pupils per year, doing a 3-hour practical. This work is partly funded by Grenoble INP. Various contacts are currently being developed in the educational world. Besides the “engineer weeks”, several groups of PhD student monitors conduct experiments based on MobiNet with high-school classes as part of their teaching. Moreover, presentations in workshops and institutes are given, and a web site repository is maintained. A web version is currently under preliminary development.
Freestyle is a software tool for non-photorealistic line drawing from 3D scenes (Figure ). It is designed as a programmable interface to allow maximum control over the style of the final drawing: the user "programs" how the silhouettes and other feature lines from the 3D model should be turned into stylized strokes, using a set of programmable operators dedicated to style description (see the minimal style module sketched below). This programmable approach, inspired by the shading languages available in photorealistic renderers such as Pixar's RenderMan, overcomes the limitations of integrated software with access to a limited number of parameters, and permits the design of an infinite variety of rich and complex styles. The system currently focuses on pure line drawing as a first step. The style description language is Python augmented with our set of operators. Freestyle was developed in the framework of a research project dedicated to the study of stylized line drawing rendering from 3D scenes. This research has led to two publications , .
In 2008, Freestyle got a new life, completely outside Maverick or Inria: it was the basis of one of the 6 Google Summer of Code projects awarded to the Blender Foundation .
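To give a flavor of the programmable style description, here is a minimal style module that extracts visible silhouettes and renders them as uniform black strokes. The identifiers follow the Blender integration of Freestyle (the original standalone API differs slightly), so this is an illustrative sketch rather than one of the original research modules:

```python
# Minimal Freestyle style module: visible silhouettes as black strokes.
# Module names follow Blender's Freestyle integration.
from freestyle.types import Operators
from freestyle.predicates import QuantitativeInvisibilityUP1D, NotUP1D, TrueUP1D
from freestyle.chainingiterators import ChainSilhouetteIterator
from freestyle.shaders import ConstantThicknessShader, ConstantColorShader

# Keep only feature lines that are fully visible (invisibility == 0).
Operators.select(QuantitativeInvisibilityUP1D(0))
# Chain adjacent visible segments into long strokes along silhouettes.
Operators.bidirectional_chain(ChainSilhouetteIterator(),
                              NotUP1D(QuantitativeInvisibilityUP1D(0)))
# Turn every chain into a 2-pixel-wide black stroke.
Operators.create(TrueUP1D(), [ConstantThicknessShader(2.0),
                              ConstantColorShader(0.0, 0.0, 0.0)])
```

Richer styles are obtained by swapping in other predicates, chaining iterators and shaders, which is exactly the variety the programmable approach is designed to offer.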
We provide an implementation of the vector drawing tool described in our Diffusion Curves Siggraph paper (Figure ). This prototype is composed of the Windows binary, along with the required shader programs (i.e. in source code). The software is available for download at http://
The VRender library is a simple tool to render the content of an OpenGL window to a vectorial device such as Postscript, XFig, and soon SVG. The main usage of such a library is to make clean vectorial drawings for publications, books, etc.
In practice, VRender replaces the z-buffer based hidden surface removal of OpenGL by sorting the geometric primitives so that they can be rendered in a back-to-front order, possibly cutting them into pieces to solve cycles.
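The core of this back-to-front ordering is a painter's algorithm. A minimal sketch in Python, using a centroid-depth heuristic and omitting the cycle-splitting step that VRender actually performs:

```python
import numpy as np

def painter_sort(triangles):
    """Back-to-front ordering of primitives (painter's algorithm).

    'triangles' is an (n, 3, 3) array of camera-space vertices with the
    camera looking down -z, so more negative z is farther away. Sorting
    by centroid depth is a heuristic: the real library also cuts
    primitives into pieces to break depth-order cycles, omitted here.
    """
    depth = triangles[:, :, 2].mean(axis=1)  # centroid z per triangle
    order = np.argsort(depth)                # most negative z first
    return triangles[order]

tris = np.random.default_rng(0).normal(size=(5, 3, 3))
for tri in painter_sort(tris):
    pass  # emit 'tri' to the vector output (Postscript, SVG...), far to near
```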
VRender is also responsible for the vectorial snapshot feature of the QGLViewer library. VRender is released under the LGPL licence and is freely available for download at http://
Now available at http://
Proland (for procedural landscape) is a software platform originally developed in the Evasion project-team by Eric Bruneton, and currently funded by the ANR-JCJC SimOne. The goal of this platform is the real-time quality rendering and editing of large landscapes. All features can work with planet-sized terrains, for all viewpoints from ground to space. Most of the work published by Eric Bruneton and Fabrice Neyret has been done within Proland, and a large part has been integrated in the main branch. Several licences have been transferred to companies. Eric Bruneton was hired by Google Zürich in September 2011, but will be able to keep some participation in the project.
Soon available at http://
GigaVoxels is a software platform initiated from the PhD work of Cyril Crassin, and currently funded by the ANR CONTINT RTIGE (Figure ). The goal of this platform is the real-time quality rendering of very large and very detailed scenes which could not fit in memory. Performance permits showing details during deep zooms and walks through very crowded scenes (which are rigid, for the moment). The principle is to represent data on the GPU as a sparse voxel octree whose multiscale voxel bricks are produced on demand, only when necessary and only at the required resolution, and kept in an LRU cache. A user-defined producer spans the CPU and GPU, and can load, transform, or procedurally create the data. Another user-defined function is called to shade each voxel according to the user-defined voxel content, so that the user can choose to distribute the appearance-making at creation time (for faster rendering) or on the fly (for storage-free thin procedural details). The efficient rendering is done using GPU differential cone-tracing with the scale corresponding to the 3D MIP-mapping LOD, allowing quality rendering with one single ray per pixel (see the sketch below). Data is produced on a cache miss, and thus only when visible (accounting for view frustum and occlusion). Soft shadows and depth-of-field are easily obtained using larger cones, and are indeed cheaper than unblurred rendering. Besides the representation, the data management and the base rendering algorithm themselves, we also worked on real-time light transport, and on quality pre-filtering of complex data. Ongoing research addresses animation. GigaVoxels is currently used for the quality real-time exploration of the detailed galaxy in ANR RTIGE. This work led to several publications, and several licences have been sold to companies.
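The LOD-selection principle behind cone tracing can be sketched in a few lines: the cone footprint grows with distance, and the octree is descended only until voxels match that footprint. This is a sketch of the principle in Python, not GigaVoxels' actual GPU code; all names are illustrative.

```python
import numpy as np

def cone_lod(distance, pixel_half_angle, root_voxel_size, max_level):
    """Octree level whose voxel size matches the pixel-cone footprint.

    A pixel cone's footprint grows as distance * tan(half_angle); we
    descend the octree only until voxels are about that size, the 3D
    analogue of MIP-map LOD selection. Voxel size halves per level:
    size(l) = root_voxel_size / 2**l.
    """
    footprint = distance * np.tan(pixel_half_angle)
    level = np.log2(root_voxel_size / np.maximum(footprint, 1e-9))
    return float(np.clip(level, 0, max_level))

# Marching a ray front to back: distant samples map to coarser,
# pre-filtered bricks, so one ray per pixel suffices.
for d in (1.0, 4.0, 16.0, 64.0):
    print(d, round(cone_lod(d, 0.001, 1.0, max_level=10), 2))
```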
Recent work has shown that the perception of 3D shapes, material properties and illumination are inter-dependent, although for practical reasons, each set of experiments has probed these three causal factors independently. Most of these studies share a common observation though: that variations in image intensity (both their magnitude and direction) play a central role in estimating the physical properties of objects and illumination. Our aim is to separate retinal image intensity gradients into contributions of different shape and material properties, through a theoretical analysis of image formation. We find that gradients can be understood as the sum of three terms: variations of surface depth conveyed through surface-varying reflectance and near-field illumination effects (shadows and inter-reflections); variations of surface orientation conveyed through reflections and far-field lighting effects; and variations of surface micro-structures conveyed through anisotropic reflections. We believe our image gradient decomposition constitutes a solid and novel basis for perceptual inquiry. We first illustrate each of these terms with synthetic 3D scenes rendered with global illumination. We then show that it is possible to mimic the visual appearance of shading and reflections directly in the image, by distorting patterns in 2D. Finally, we discuss the consistency of our mathematical relations with observations drawn by recent perceptual experiments, including the perception of shape from specular reflections and texture. In particular, we show that the analysis can correctly predict certain specific illusions of both shape and material.
Shading depends on different interactions between surface geometry and lighting. Under collimated illumination, shading is dominated by the 'direct' term, in which image intensities vary with the angle between surface normals and light sources. Diffuse illumination, by contrast, is dominated by 'vignetting effects', in which image intensities vary with the degree of self-occlusion (the proportion of incoming directions that each surface point 'sees'). These two types of shading thus lead to very different intensity patterns, which raises the question of whether shading inferences are based directly on image intensities. We show here that the visual system uses 2D orientation signals ('orientation fields') to estimate shape, rather than raw image intensities and an estimate of the illuminant. We rendered objects under varying illumination directions designed to maximize the effects of illumination on the image. We then passed these images through monotonic, non-linear intensity transfer functions to decouple luminance information from orientation information, thereby placing the two signals in conflict (Figure ). In Task 1, subjects adjusted the 3D shape of match objects to report the illusory effects of changes of illumination direction on perceived shape. In Task 2, subjects reported which of a pair of points on the surface appeared nearer in depth. They also reported perceived illumination directions for all stimuli. We find that the substantial misperceptions of shape are well predicted by orientation fields, and poorly predicted by luminance-based shape-from-shading. For the untransformed images, illumination could be estimated accurately, but not for the transformed images. Thus shape perception was, for these examples, independent of the ability to estimate the lighting. Together these findings support neurophysiological estimates of shape from the responses of orientation-selective cell populations, irrespective of the illumination conditions.
We study the use of depth of field for depth perception in Direct Volume Rendering (Figure ). Direct Volume Rendering with Phong shading and perspective projection is used as the baseline. Depth of field is then added to see its impact on the correct perception of ordinal depth. Accuracy and response time are used as the metrics to evaluate the usefulness of depth of field. The on-site user study has two parts: static and dynamic. Eye tracking is used to monitor the gaze of the subjects. From our results we see that, though depth of field does not act as a proper depth cue in all conditions, it can be used to reinforce the perception of which feature is in front of the other. The best results (high accuracy and fast response time) for correct perception of ordinal depth are obtained when the front feature (out of the two the users were to choose from) is in focus and perspective projection is used. Our work has been published in the proceedings of the Pacific Graphics conference in 2013 .
Preserving meaningful local extrema of scalar data in a visualization, while removing nearby extrema with similar values, is a powerful way to enhance the appearance of significant features. For the special case of monotonic data, i.e. data with no local extrema in the interior of the domain, the visualization should not introduce spurious local extrema. We study a new piecewise polynomial interpolant that preserves the monotonicity of scalar data defined on a 2D uniform grid. Based on this interpolant, we also plan to introduce a new method for visualizing data that has been simplified according to its Morse-Smale complex, a combinatorial structure connecting the critical points and partitioning the domain into a set of monotonic regions. In contrast with previous analogous works, our approach uses piecewise polynomial functions defined in each monotonic region instead of optimizing values on the original mesh vertices. We have presented our first results in a workshop and have submitted a paper for a book chapter about our new monotonic interpolant.
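For intuition, the 1D analogue of such a monotonicity-preserving interpolant is a cubic with limited tangents. The sketch below uses harmonic-mean slope limiting, as in PCHIP-style schemes; it is a generic textbook construction, not the team's 2D interpolant.

```python
import numpy as np

def monotone_slopes(x, y):
    """Tangents for a monotonicity-preserving cubic interpolant (1D).

    Harmonic-mean slope limiting: wherever the data is monotone, the
    limited tangents guarantee the piecewise cubic introduces no
    spurious local extrema between samples.
    """
    d = np.diff(y) / np.diff(x)              # secant slopes
    m = np.empty_like(y)
    m[0], m[-1] = d[0], d[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        harmonic = 2.0 * d[:-1] * d[1:] / (d[:-1] + d[1:])
    # Flat tangent at sign changes (a genuine local extremum in the data).
    m[1:-1] = np.where(d[:-1] * d[1:] > 0.0, harmonic, 0.0)
    return m

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 0.1, 0.9, 1.0])            # monotone samples
print(monotone_slopes(x, y))                  # no overshoot possible
```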
The preparation of CAD models from complex assemblies for simulation purposes is a very time-consuming and tedious process, since many tasks such as meshing and idealization are still completed manually. Here, the detection and extraction of geometric interfaces between components of the assembly is of central importance, not only for the simulation objectives but also for all necessary shape transformations such as idealizations or detail removals. It is a repetitive task, in particular when complex assemblies have to be dealt with. This work proposes a method to rapidly and fully automatically generate a precise geometric description of interfaces in generic B-Rep CAD models. The approach combines an efficient GPU ray-casting technique, commonly used in computer graphics, with a graph-based curve extraction algorithm. Not only is it able to detect a large number of interfaces efficiently, but it also provides an accurate NURBS geometry of the interfaces, which can be stored in a plain STEP file for further downstream treatment. We demonstrate our approach on examples from the aeronautics and automotive industries, see Figure . Our results have been funded in part by the ANR project ROMMA. They have been published as a journal paper in , and presented at the Solid and Physical Modeling conference in 2013.
Selections are central to image editing, since they are the starting point of common operations such as copy-pasting and local edits. Creating them by hand is particularly tedious, and scribble-based techniques have been introduced to assist the process. By interpolating a few strokes specified by users, these methods generate precise selections. However, most of the algorithms assume a 100% accurate input, and even small inaccuracies in the scribbles often degrade the selection quality, which imposes an additional burden on users. In this work, we propose a selection technique tolerant to input inaccuracies (see example in Figure ). We use a dense conditional random field (CRF) to robustly infer a selection from possibly inaccurate input. Further, we show that patch-based pixel similarity functions yield more precise selections than simple point-wise metrics. However, efficiently solving a dense CRF is only possible in low-dimensional Euclidean spaces, and the metrics that we use are high-dimensional and often non-Euclidean. We address this challenge by embedding pixels in a low-dimensional Euclidean space with a metric that approximates the desired similarity function. The results show that our approach performs better than previous techniques, and that two options are sufficient to cover a variety of images, depending on whether the objects are textured. This work has been published at the Eurographics conference .
Many rendering methods use discrete textures (planar arrangements of vector elements) instead of classic bitmaps. Discrete textures are resolution-insensitive and make it easy to modify the elements' geometry or spatial distribution. However, manually drawing such textures is a time-consuming task, and automating their production has long been studied. The methods designed for this purpose deal with a difficult tradeoff between the reachable variety of textures and the usability for a community of users. In this work, we show that considering discrete textures as programs allows for a larger variety of textures than relying on a given model. This work was presented as a Siggraph 2013 talk .
Last year's work and HPG'12 paper “Representing Appearance and Pre-filtering Subpixel Data in Sparse Voxel Octrees” dealt with the light- and view-dependent aspects of complex surfaces due to sub-pixel details. This was done by replacing sub-voxel height fields with a Gaussian slope distribution, and height-correlated colors with its gradient, feeding a Cook-Torrance-like microfacet BRDF.
In continuation of this, and in the same spirit of replacing sub-pixel values with Gaussian distributions to be shaded within the microfacet BRDF framework, this year we addressed the filtering of color maps (on surfaces and per se), displacement maps, and reflectance maps, thus obtaining a complete model of the local rendering integral (see Figure ).
Note that Eric did this work partly during his 6-month stay at the University of Montreal, funded by the regional Exploradoc program. He also collaborated with NVIDIA on ongoing work related to the animation of GigaVoxels, and we were invited for a stay of several weeks at Weta Digital, NZ, to help them apply our techniques.
Indeed, several ubiquitous CG operations, such as filtering non-linear functions of the data, are still mostly unsolved despite their visible flaws. Typically, density, noise data, normals or heights are filtered before feeding a color look-up texture, even though the strong non-linearity of the transform forbids factoring it out of the integral. This results in very visible flaws, such as thin blue bone-and-air foam appearing as red muscle at a distance in volume visualization, silhouettes and horizons getting the middle tint instead of the integral of tints, and procedural noise bump maps and height fields appearing smooth instead of rough.
Assuming a Gaussian distribution of colors within a pixel or voxel, the filtered color values can be represented as color lobes (i.e. histograms) instead of scalars. In all cases where the sub-pixel/sub-voxel raw data can also be represented as a Gaussian distribution (e.g. Perlin noise), the filtering is just the inner product of the two lobes. It can easily be tabulated as a 1D LUT MIP-map whose LOD corresponds to the standard deviation, and thus the scale. Since microfacet BRDF models allow estimating the visible slope statistics accounting for light and view visibility, this yields emerging light- and view-dependent color effects, both accurately and very efficiently. Note that the same scheme applies to colors correlated with orientations rather than heights (see Figure ). This provides a multiscale representation where sub-pixel/sub-voxel data is represented through lobes which can be precalculated, or calculated on demand from the finer level.
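The following sketch illustrates the principle on a toy volume-rendering transfer function: applying the LUT to the filtered (mean) density gives the wrong "muscle" color, whereas integrating the LUT against the Gaussian distribution of sub-pixel densities gives a correct mixture. The LUT and names are hypothetical; in the actual technique this expectation is tabulated once as a 1D LUT MIP-map indexed by the standard deviation.

```python
import numpy as np

def lut_through_gaussian(lut, mean, std):
    """Correctly filter a non-linear color LUT against Gaussian data.

    The filtered color is E[lut(x)] for x ~ N(mean, std), instead of
    the naive lut(mean). Here the expectation is evaluated directly
    for clarity rather than tabulated.
    """
    t = np.linspace(0.0, 1.0, len(lut))
    w = np.exp(-0.5 * ((t - mean) / max(std, 1e-6)) ** 2)
    w /= w.sum()
    return (w[:, None] * lut).sum(axis=0)

# Toy transfer function: air is blue, muscle red, bone white.
lut = np.zeros((256, 3))
lut[:86]    = [0.0, 0.0, 1.0]   # low density: air / foam
lut[86:171] = [1.0, 0.0, 0.0]   # mid density: muscle
lut[171:]   = [1.0, 1.0, 1.0]   # high density: bone

# A distant pixel mixing air and bone has mean density ~0.5, large std:
print("naive:  ", lut[128])                              # pure red (wrong)
print("correct:", lut_through_gaussian(lut, 0.5, 0.35))  # mixture, not pure red
```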
This work was published at the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D) 2013 . An extended version, “Filtering Non-Linear Transfer Functions on Surfaces”, was published in IEEE Transactions on Visualization and Computer Graphics in 2013 .
Here, the last term of the local rendering integral is addressed: the filtering of sub-pixel/sub-voxel geometry and BRDF as an apparent BRDF applied to a macro-geometry. By accurately re-deriving the BRDF of a displacement map assumed to have a sub-pixel Gaussian distribution (with an exact masking term, a more accurate light-view cross-correlation, and an offset apparent lobe), and by noting that the reflectance of the environment can be pre-filtered like the textures of the previous paper, we finally obtain a complete model of the pre-filtered appearance of surfaces (see Figure ). This work, co-first-authored with Jonathan Dupuy, was published in ACM Transactions on Graphics and presented at Siggraph Asia .
Shading acquired materials with high-frequency illumination is computationally expensive. Estimating the shading integral requires multiple samples of the incident illumination. The number of samples required may vary across the image, and the image itself may have high- and low-frequency variations, depending on a combination of several factors. Adaptively distributing the computational budget across the pixels for shading is a challenging problem. In this work, we depict complex materials, such as acquired reflectances, interactively, without any geometry-based precomputation. In each frame, we first estimate the frequencies in the local light field arriving at each pixel, as well as the variance of the shading integrand. Our frequency analysis accounts for combinations of a variety of factors: the reflectance of the object projecting to the pixel, the nature of the illumination, the local geometry and the camera position relative to the geometry and lighting. We then exploit this frequency information (bandwidth and variance) to adaptively sample for reconstruction and integration. For example, fewer pixels per unit area are shaded for pixels projecting onto diffuse objects, and fewer samples are used for integrating illumination incident on specular objects (see Figure ). This work has been published in IEEE Transactions on Visualization and Computer Graphics , as a follow-up to a previous paper published at the I3D conference.
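A minimal sketch of the kind of budget allocation this enables: per-pixel predicted variance drives the number of integration samples, and predicted bandwidth flags pixels that could be shaded sparsely and interpolated. The proportional rule below is an illustrative assumption, not the paper's exact estimator; all names are hypothetical.

```python
import numpy as np

def allocate_shading(variance, bandwidth, budget, n_min=1):
    """Sketch of adaptive budget allocation for shading.

    High-variance integrands get more integration samples; pixels with
    low local image bandwidth are candidates for sparse shading plus
    interpolation.
    """
    n = n_min + (budget * variance / variance.sum()).astype(int)
    sparse = bandwidth < bandwidth.mean()   # could be shaded sparsely
    return n, sparse

rng = np.random.default_rng(1)
variance = rng.random((4, 4))    # predicted variance of the integrand
bandwidth = rng.random((4, 4))   # predicted local image bandwidth
n_samples, sparse = allocate_shading(variance, bandwidth, budget=256)
print(n_samples.sum(), "samples;", int(sparse.sum()), "interpolatable pixels")
```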
The rendering of effects such as motion blur and depth-of-field requires costly 5D integrals. We dramatically accelerate their computation through adaptive sampling and reconstruction based on the prediction of the anisotropy and bandwidth of the integrand. For this, we develop a new frequency analysis of the 5D temporal light field, and show that first-order motion can be handled through simple changes of coordinates in 5D. We further introduce a compact representation of the spectrum using the covariance matrix and Gaussian approximations. We derive update equations for the 5 × 5 covariance matrices for each atomic light transport event, such as transport, occlusion, BRDF, texture, lens, and motion. The focus on atomic operations makes our work general, and removes the need for special-case formulas. We present a new rendering algorithm that computes 5D covariance matrices on the image plane by tracing paths through the scene, focusing on the single-bounce case. This allows us to reduce sampling rates when appropriate and perform reconstruction of images with complex depth-of-field and motion blur effects (see Figure ). This work was published in ACM Transactions on Graphics and presented at Siggraph 2013.
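As an illustration, here is one atomic operator, free-space transport, applied to a 5x5 covariance matrix in paraxial light-field coordinates (x, y, theta, phi, t). The matrix below follows the spirit of the analysis; the paper's exact conventions differ in their details, so treat this as a structural sketch.

```python
import numpy as np

def transport(sigma, d):
    """Atomic 'transport' update of a 5D light-field covariance matrix.

    Travelling a free-space distance d shears space by angle
    (x' = x + d*theta, y' = y + d*phi), so the covariance transforms
    congruently: sigma' = M sigma M^T.
    """
    M = np.eye(5)
    M[0, 2] = d   # x picks up d * theta
    M[1, 3] = d   # y picks up d * phi
    return M @ sigma @ M.T

# A compact spectrum spreads anisotropically after transport, telling
# the renderer where it can lower the sampling rate.
sigma0 = np.diag([1.0, 1.0, 0.1, 0.1, 0.01])
print(np.round(transport(sigma0, d=10.0), 2))
```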
Efficient filtering remains an important challenge in computer graphics, particularly when filters are spatially varying, have large extent, and/or exhibit complex anisotropic profiles. We explored an efficient filtering approach for these difficult cases based on isotropic filter decomposition (IFD). By decomposing complex filters into linear combinations of simpler, displaced isotropic kernels, and precomputing a compact prefiltered dataset, we are able to interactively apply any number of (potentially transformed) filters to a signal (see Figure ). Our performance scales linearly with the size of the decomposition, not with the size nor the dimensionality of the filter, and our prefiltered data requires reasonable storage, comparing favorably to the state of the art. We apply IFD to interesting problems in image processing and realistic rendering. This work is currently under submission and a technical report is already available .
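A sketch of the idea, assuming SciPy for the isotropic blurs: each displaced isotropic atom becomes a lookup into a precomputable isotropically-blurred version of the signal, so the cost scales with the number of atoms. The three-atom "diagonal" filter is a toy example, not one of the paper's decompositions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def apply_ifd(image, atoms):
    """Apply a filter written as a sum of displaced isotropic kernels.

    'atoms' is a list of (weight, (dy, dx), sigma): the filter is
    sum_i w_i * G_{sigma_i}(x - o_i). Each isotropic blur of the signal
    can be precomputed once and reused for any decomposed filter.
    """
    out = np.zeros_like(image)
    for w, (dy, dx), sigma in atoms:
        blurred = gaussian_filter(image, sigma)   # precomputable dataset
        out += w * np.roll(blurred, (dy, dx), axis=(0, 1))
    return out

img = np.random.default_rng(0).random((64, 64))
# Toy anisotropic (diagonal) filter from three displaced isotropic atoms.
atoms = [(1/3, (-2, -2), 1.0), (1/3, (0, 0), 1.0), (1/3, (2, 2), 1.0)]
filtered = apply_ifd(img, atoms)
```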
Some materials, such as coffee, milk or marble, have a soft translucent aspect because of sub-surface scattering: light enters them and is scattered several times inside before leaving in a different place. A full representation of sub-surface scattering effects in illumination simulation is computationally expensive. The main difficulty comes from multiple scattering events: the high number of events increases the uncertainty on the result, forcing us to allocate more time for the computations. Recently, we showed that there is a strong correlation between the surface effects of multiple scattering inside the material and the effects after just two scattering events. This knowledge will help in accelerating multiple scattering effects (see Figure ). We exploited this knowledge to provide a model and implementation for fast computation of double-scattering events, using a precomputed density function stored in a compact way. This work has been published in IEEE Computer Graphics and Applications .
BRDF acquisition is a tedious operation, since it requires measuring 4D data. On one side of the spectrum lie explicit methods, which perform many measurements to potentially produce very accurate reflectance data after interpolation. These methods are generic but practically difficult to set up, and produce high volumes of data. On the other side, acquisition methods based on parametric models implicitly reduce the infinite dimensionality of the BRDF space to the number of parameters, allowing acquisition with few samples. However, parametric methods require non-linear optimization. They become unstable when the number of parameters is large, with no guarantee that a given parametric model can ever fit particular measurements.
We experiment with a new acquisition method where the measurement of the BRDF is performed from a single image, knowing the normals and illumination. To tackle such a severely underconstrained problem, we express the BRDF in a high-dimensional basis, and perform the reconstruction using compressive sensing, looking for the sparsest solution to the linear problem of fitting the measured image. Doing so, we leverage the coherency between the measured pixels, while keeping the high dimension of the space in which the BRDF is searched.
This work is a very first attempt at reconstructing BRDFs using compressive sensing. In Figure  we used a synthetic input image, for the sake of checking the feasibility of the recovery algorithm, in the particular case of an isotropic, spatially constant BRDF. The possibility of extending our theory to spatially varying and anisotropic BRDFs is currently under investigation. We would like to orient our work toward BRDF acquisition with consumer hardware. In particular, our preliminary results indicate that compressive sensing could achieve a very accurate acquisition with additional input, such as a video of a static object under probed lighting.
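The generic sparse-recovery step can be illustrated with iterative soft-thresholding (ISTA) on a toy underdetermined system; in our setting, the columns of A would correspond to basis BRDFs rendered at the measured pixels, and x to the sparse coefficient vector. This is a standard solver sketch, not our exact reconstruction code.

```python
import numpy as np

def ista(A, b, lam=0.05, n_iter=500):
    """Iterative soft-thresholding (ISTA) for
    min_x 0.5 * ||A x - b||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L          # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 400))                # 100 'pixels', 400 basis BRDFs
x_true = np.zeros(400)
x_true[[5, 42, 300]] = [1.0, -2.0, 0.5]        # sparse coefficients
b = A @ x_true                                 # synthetic measurement image
x_rec = ista(A, b)
print("recovered support:", np.flatnonzero(np.abs(x_rec) > 0.1))
```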
This work has been published as a poster at the Siggraph Asia 2013 conference .
Curves are widely used in computer science to describe real-life objects such as slender deformable structures. Using only 3 parameters per element, piecewise helices offer an interesting and compact way of representing digital curves. In our work , we present a robust and fast algorithm to approximate Bezier curves with G1 piecewise helices. Our approximation algorithm takes a Bezier spline as input, along with an integer N, and returns a piecewise helix with N elements that closely approximates the input curve. The key idea of our method is to take N+1 evenly distributed points along the curve, together with their tangents, and interpolate these tangents with helices by slightly relaxing the points. Building on previous work, we generalize the proof for Ghosh's co-helicity condition, which serves to guarantee the correctness of our algorithm in the general case. Finally, we demonstrate both the efficiency and robustness of our method by successfully applying it to various datasets of increasing complexity, ranging from synthetic curves created by an artist to automatic image-based reconstructions of real data such as hair, heart muscular fibers or magnetic field lines of a star.
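The first step of the method, sampling N+1 points and tangents on the input curve, can be sketched as follows for a single cubic Bezier segment (uniform in the curve parameter rather than in arc length, a simplifying assumption; the co-helicity fitting itself is beyond this sketch).

```python
import numpy as np

def bezier_points_tangents(P, n):
    """Sample n+1 points and unit tangents on a cubic Bezier curve.

    P is a (4, 3) array of control points. The returned point/tangent
    pairs are what the helix-fitting stage then interpolates with
    helical arcs.
    """
    t = np.linspace(0.0, 1.0, n + 1)[:, None]
    B = ((1 - t) ** 3 * P[0] + 3 * (1 - t) ** 2 * t * P[1]
         + 3 * (1 - t) * t ** 2 * P[2] + t ** 3 * P[3])
    dB = (3 * (1 - t) ** 2 * (P[1] - P[0])
          + 6 * (1 - t) * t * (P[2] - P[1])
          + 3 * t ** 2 * (P[3] - P[2]))
    T = dB / np.linalg.norm(dB, axis=1, keepdims=True)
    return B, T

ctrl = np.array([[0, 0, 0], [1, 2, 0], [2, -1, 1], [3, 0, 1]], float)
points, tangents = bezier_points_tangents(ctrl, n=8)  # 9 helix endpoints
```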
In recent years, considerable progress has been achieved in accurately acquiring the geometry of human hair, largely improving the realism of virtual characters. In parallel, rich physics-based simulators have been successfully designed to capture the intricate dynamics of hair due to contact and friction. However, at the moment there exists no consistent pipeline for converting a given hair geometry into a realistic physics-based hair model. Current approaches simply initialize the hair simulator with the input geometry in the absence of external forces. This results in an undesired sagging effect when the dynamic simulation is started, which basically ruins all the effort put into the accurate design and/or capture of the input hairstyle. In this work we propose the first method which consistently and robustly accounts for surrounding forces (gravity and frictional contacts, including hair self-contacts) when converting a geometric hairstyle into a physics-based hair model. Taking an arbitrary hair geometry as input, together with a corresponding body mesh, we interpret the hair shape as a static equilibrium configuration of a hair simulator, in the presence of gravity as well as hair-body and hair-hair frictional contacts. Assuming that hair parameters are homogeneous and lie in a plausible range of physical values, we show that this large underdetermined inverse problem can be formulated as a well-posed constrained optimization problem, which can be solved robustly and efficiently by leveraging the frictional contact solver of the direct hair simulator. Our method was successfully applied to the animation of various hair geometries, ranging from synthetic hairstyles manually designed by an artist to the most recent human hair data automatically reconstructed from capture.
We are funded by the ANR research program "Blanc" for a joint research project with two other Inria research teams, REVES in Sophia-Antipolis and iPARLA in Bordeaux. The goal of this project is studying light transport operators for global illumination, both in terms of frequency analysis and dimensional analysis. The grant started in October 2011, for 48 months.
RTIGE stands for Real-Time and Interactive Galaxy for Edutainment. This is an ANR CONTINT (Contents and Interactions) research program, for a joint research project with the EVASION Inria project-team, the GEPI and LERMA research teams at Paris Observatory, and the RSA Cosmos company. The goal of this project is to simulate the quality multi-spectral real-time exploration of the Galaxy with Hubble-like images, based on simulation data, statistical data coming from observation, star catalogs, and procedural amplification for stars and dust clouds distributions. RSA-Cosmos aims at integrating the results in digital planetariums. The grant started in December 2010, for 48 months.
The ANR project ROMMA was accepted in 2009. It started in January 2010 for a duration of 4 years. The partners of this project are academic and industry experts in mechanical engineering, numerical simulation, geometric modeling and computer graphics. The aim of the project is to efficiently and robustly model very complex mechanical assemblies. We work on the interactive computation of contacts between mechanical parts using GPU techniques. We also investigate the visualization of data with uncertainty, applied in the context of the project.
The MAPSTYLE project aims at exploring the possibilities offered by cartography and expressive rendering to propose original and new cartographic representations. Through this project, we target two types of needs. On the one hand, mapping agencies produce series of paper maps with some renderings that are still derived from drawings made by hand 50 years ago: for example, rocky areas in the TOP25 series (at 1:25000) of the French Institut Géographique National (IGN). The rendering of these rocky areas must be automated, while retaining its effectiveness, to meet the safety requirements of hikers. On the other hand, Internet mapping tools allow any user to become a cartographer. However, they provide default styles that cannot be changed (GeoPortal, Google Maps), or styles that are editable but without any assistance or expertise (CloudMade). In such cases, as in the case of mobile applications, we identify the need to offer users means to design map styles that are more personalised and more attractive, to meet their expectations (decision-making, recreation, etc.) and their tastes. The grant started in October 2012, for 48 months.
We have a continuing collaboration with Professor Kavita Bala, from Cornell University, USA, on the subject of global illumination and the simulation of light scattering in participating media. Our work has been accepted at ACM Transactions on Graphics in 2014.
We currently have a very fruitful collaboration with Derek Nowrouzezahrai, from the University of Montreal, Canada, dealing with isotropic filter decomposition in the spherical domain, based on a zonal harmonic basis.
Fabrice Neyret has been visiting Weta Digital (New Zealand) since November 23, 2013. Eric Heitz visited Weta Digital (New Zealand) from November 23, 2013 to December 12, 2013.
Romain Vergne is a member of the jury for the best paper award at AFIG 2013.
Romain Vergne is a member of the “Commission d'auto-évaluation” of the LJK.
Romain Vergne is a member of the EGSR 2013 program committee.
Cyril Soler is a member of the Eurographics 2013 program committee.
Cyril Soler is a member of the Pacific Graphics 2013 program committee.
Cyril Soler is a member of the Eurographics Symposium on Rendering 2013 program committee.
Joëlle Thollot is a member of the Expressive 2013 program committee.
Nicolas Holzschuch was co-chair of the Eurographics Symposium on Rendering 2013, with Szymon Rusinkiewicz from Princeton University.
Nicolas Holzschuch was co-chair of the Tutorials track of Eurographics 2014, with Karol Myszkowski from the MPI Informatik.
Nicolas Holzschuch has been a member of the Inria Evaluation Committee since 2008.
Nicolas Holzschuch is a member of the ACM Symposium on Interactive 3D Graphics 2014 program committee.
Georges-Pierre Bonneau is industrial co-chair of Eurographics 2014, with Xin Tong from Microsoft Research Asia.
Georges-Pierre Bonneau is a member of the international program committee of Solid and Physical Modeling 2013.
Georges-Pierre Bonneau is a member of the international program committees of EnvirVis 2013 and 2014.
Joëlle Thollot and Georges-Pierre Bonneau are both full professors of Computer Science. Romain Vergne is an associate professor in Computer Science. They teach general computer science topics at basic and intermediate levels, and advanced courses in computer graphics and visualization at the master level. Nicolas Holzschuch teaches advanced courses in computer graphics at the master level.
PhD: Alexandre Derouet-Jourdan, Courbes dynamiques : de la capture de formes géométriques à l'animation, UJF, November 7 2013, Florence Bertails-Descoubes, Joëlle Thollot.
PhD in progress : Eric Heitz, Représentations alternatives pour le traitement haute qualité efficace des scènes complexes, October 2010, Fabrice Neyret.
PhD in progress : Léo Allemand-Giorgis, Visualisation de champs scalaires guidée par la topologie, October 2012, Georges-Pierre Bonneau, Stefanie Hahmann.
PhD in progress: Aarohi Johal, Algorithmes de génération automatique d'arbres de construction à partir de modèles géométriques CAO B-Rep, September 2013, Jean-Claude Léon, Georges-Pierre Bonneau, CIFRE PhD with EDF R&D.
PhD in progress: Benoît Zupancic, Acquisition of reflectance properties using compressive sensing, October 2012, Nicolas Holzschuch, Cyril Soler.
PhD in progress : Hugo Loi, Automatisation de la génération de textures vectorielles et application à la cartographie, October 2012, Joëlle Thollot, Thomas Hurtut.
PhD in progress : Benoit Arbelot, Etudes statistiques de forme, de matériaux et d’environnement pour la manipulation de l’apparence, October 2013, Joëlle Thollot, Romain Vergne.
Georges-Pierre Bonneau has been a reviewer and member of the jury for the “Habilitation à Diriger des Recherches” of Guillaume Lavoué at the University of Lyon (04/04/13) and of Frédéric Cordier at the University of Haute-Alsace (29/11/13).
Georges-Pierre Bonneau has been a member of the jury for the PhDs of Adrien Bernhardt (Grenoble, 03/07/13), Jeremy Espinas (Lyon, 24/10/13), Lionel Untereiner (Strasbourg, 08/11/13) and Cédric Cordier (Grenoble, 06/12/13).
Nicolas Holzschuch has been a member of the jury of the “Habilitation à Diriger des Recherches” of Lilian Aveneau at the University of Poitiers (12/12/2013).