ARTIS is both an INRIA project-team and a subset of the LJK (UMR 5224), a joint research lab of CNRS, Université Joseph Fourier Grenoble-I (UJF), Université Pierre Mendès France Grenoble II (UPMF) and Institut National Polytechnique de Grenoble (INPG).
ARTIS was created in January, 2003, based on the observation that current image synthesis methods appear to provide limited solutions for the variety of current applications. The classical approach to image synthesis consists of separately modeling a 3D geometry and a set of photometric properties (reflectance, lighting conditions), and then computing their interaction to produce a picture. This approach severely limits the ability to adapt to particular constraints or freedoms allowed in each application (such as precision, real-time, interactivity, uncertainty about input data...). Furthermore, it restricts the classes of possible images and does not easily lend itself to new uses such as illustration, where a form of hierarchy of image constituents must be constructed.
One of the goals of the project is the definition of a more generic framework for the creation of synthetic images, integrating elements of 3D geometry, of 2D geometry (built from 3D geometry), of appearance (photometry, textures...), of rendering style, and of importance or relevance for a given application. The ARTIS project-team therefore deals with multiple aspects of image synthesis: model creation from various sources of data, transformations between these models, rendering and imaging algorithms, and the adaptation of the models and algorithms to various constraints or application contexts. The main research directions in ARTIS address:
Analysis and simulation of lighting effects. Development of hierarchical simulation techniques integrating the most general and realistic effects, fast rendering, inverse lighting, relighting, data acquisition based on lighting analysis.
Expressive (“non-photorealistic”) rendering. Definition and identification of rendering styles. Style extraction from existing documents. Development of new view models (mixture of 3D and 2D) and new rendering techniques.
Model simplification and transformation. Simplification of geometry and appearance, image-based representations, model transformation for various applications, detail creation and creation of virtual models from real data.
Our target applications deal with 3D image synthesis, radiative transfer simulation, visualization, virtual and augmented reality, illustration and computational photography. As application domains, we work on video games, animation movies, technical illustration, virtual heritage, lighting design, and rehabilitation after trauma.
The year 2009 was a highly productive one for the ARTIS team, with 17 publications being accepted or published in international journals and conferences. Among these achievements, we consider the following to be the highlights of the year 2009:
We had a total of five papers published in the international journal ACM Transactions on Graphics, the best journal in the field: two papers that were also accepted at the Siggraph 2009 conference (a joint work with Cornell University on low-order scattering effects, see section , and a joint work with Columbia University and MIT on frequency analysis for motion blur, see section ), and two papers that were also accepted at the Siggraph Asia 2009 conference (a work on multi-scale image decomposition, see section , and a joint work with Adobe Research and MIT on intrinsic images, see section ). The fifth paper, about frequency analysis for depth-of-field effects, was also presented at the Siggraph 2009 conference.
The objectives of ARTIS combine the resolution of “classical”, but difficult, issues in Computer Graphics, with the development of new approaches for emerging applications. A transverse objective is to develop a new approach to synthetic image creation that combines notions of geometry, appearance, style and priority.
Global illumination: the complete set of lighting effects in a scene, including shadows and multiple reflections or scattering.
Inverse rendering: a calculation process in which an image formation model is inverted to recover scene parameters from a set of images.
The classical approach to render images of three-dimensional environments is based on modeling the interaction of light with a geometric object model. Such models can be entirely empirical or based on true physical behavior when actual simulations are desired. Models are needed for the geometry of objects, the appearance characteristics of the scene (including light sources, reflectance models, detail and texture models...) and the types of representations used (for instance wavelet functions to represent the lighting distribution on a surface). Research on lighting and rendering within ARTIS is focused on the following two main problems: lighting simulation and inverse rendering.
Although great progress has been made in the past ten years in terms of lighting simulation algorithms, the application of a general global illumination simulation technique to a very complex scene remains difficult. The main challenge in this direction lies in the complexity of light transport, and the difficulty of identifying the relevant phenomena on which the effort should be focused.
The scientific goals of ARTIS include the development of efficient (and “usable”) multi-resolution simulation techniques for light transport, the control of the approximations incurred (and accepted) at all stages of the processing pipeline (from data acquisition through data representation, to calculation), as well as the validation of results against both real world cases and analytical models.
There are two distinct aspects to realism in lighting simulation: first, the physical fidelity of the computed results to the actual solution of the lighting configuration; second, the visual quality of the results. These two aspects serve two different application types: physical simulation and visually realistic rendering.
For the first case, ARTIS' goal is to study and develop lighting simulation techniques that allow incorporation of complex optical and appearance data while controlling the level of approximation. This requires, among other things, the ability to compress appearance data, as well as the representation of lighting distributions, while ensuring an acceptable balance between the access time to these functions (decompression) which has a direct impact on total computation times, and memory consumption.
Obtaining a visually realistic rendering is a drastically different problem which requires an understanding of human visual perception. One of our research directions in this area is the calculation of shadows for very complex objects. In the case of a tree, for example, computing a visually satisfactory shadow does not generally require an exact solution for the shadow of each leaf, and an appropriately constrained statistical distribution is sufficient in most cases.
Computation efficiency practically limits the maximum size of scenes to which lighting simulation can be applied. Developing hierarchical and instantiation techniques allows us to treat scenes of great complexity (several millions of primitives). In general the approach consists in choosing among the large amount of detail representing the scene, those sites, or configurations, that are most important for the application at hand. Computing resources can be concentrated in these areas, while a coarser approximation may be used elsewhere.
Our research effort in this area is two-fold: first we develop new algorithms for a smarter control of variance in Monte-Carlo algorithms, hence reducing the total cost at equivalent accuracy; secondly, we develop algorithms that specifically suit a GPU implementation, which brings us a huge gain in performance at the expense of controlled approximations.
One of the fundamental goals of ARTIS is to improve our understanding of the mathematical properties of lighting distributions (i.e. the functions describing light “intensity” everywhere). Some of these properties are currently only conjectured, for instance the unimodality (existence of a single maximum) of the light distribution created by a convex light source on a receiving surface. This conjecture is useful for computing error bounds and thus guiding hierarchical techniques. Other interesting properties can be studied by representing irradiance as convolution splines, or by considering the frequency content of lighting distributions. We also note that better knowledge and characterization of lighting distributions is beneficial for inverse rendering applications, as explained below.
Considering the synthetic image creation model as a calculation operating on scene characteristics (viewing conditions, geometry, light sources and appearance data), we observe that it may be possible to invert the process and compute some of the scene characteristics from a set of images.
This can only be attempted when this image calculation process is well understood, both at the theoretical level and at a more practical level with efficient software tools. We hope that the collective experience of lighting simulation and analysis accumulated by members of the project will guide us to develop efficient and accurate inverse rendering techniques: instead of aiming for the most general tool, we recognize that particular application cases involve specific properties or constraints that should be used in the modeling and inversion process.
Example applications include the reconstruction of 3D geometry by analyzing the variations of lighting and/or shadows, or the characterization of a light source from photographs of a known object.
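For instance, if an object is known to be Lambertian with known geometry, a directional light can be recovered from pixel intensities by a small least-squares inversion. The sketch below is purely illustrative (it is not one of the project's algorithms) and solves the normal equations of the fit directly:

```python
def solve3(A, b):
    """Solve a 3x3 linear system A x = b by Gauss-Jordan elimination
    with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

def estimate_light(normals, intensities):
    """Least-squares estimate of a directional light l from Lambertian
    samples I = n . l (lit, non-shadowed points of a known object):
    solves the normal equations (N^T N) l = N^T I."""
    A = [[sum(n[i] * n[j] for n in normals) for j in range(3)] for i in range(3)]
    b = [sum(n[i] * I for n, I in zip(normals, intensities)) for i in range(3)]
    return solve3(A, b)
```

In practice shadowed or specular samples would have to be detected and excluded before the fit; this is where the specific constraints of each application enter.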
There is no reason to restrict the use of computers for the creation and display of images to the simulation of real lighting. Indeed it has been recognized in recent years that computer processing opens fascinating new avenues for rendering images that convey particular views, emphasis, or style. These approaches are often referred to as “non-photorealistic rendering”, although we prefer the term “expressive rendering” to this negative definition.
A fundamental goal of ARTIS is to propose new image creation techniques that facilitate the generation of a great variety of images from a given scene, notably by adapting rendering to the current application. This involves, in particular, significant work on the notion of relevance, which is necessarily application-dependent. Relevance is the relative importance of various scene elements, or their treatment, for the desired result and it is necessary to define relevance both qualitatively and quantitatively. Examples of specific situations may include rendering specular effects, night-time imagery, technical illustration, computer-assisted drawing or sketching, etc. The notion of relevance will also have to be validated for real applications, including virtual reality settings.
Another research direction for expressive rendering concerns rendering styles: in many cases it should be possible to define the constitutive elements of styles, allowing the application of a given rendering style to different scenes, or in the long term the capture of style elements from collections of images.
Finally, since the application of expressive rendering techniques generally amounts to a visual simplification, or abstraction, of the scene, particular care must be taken to make the resulting images consistent over time, for interactive or animated imagery.
Computational photography refers to techniques that aim at improving the capabilities of digital photography. It has become a very active research topic, lying at the intersection of illumination computation, vision and expressive rendering. These techniques may be used to enhance images in several ways; application examples include image restoration, automatic colorization, relighting and tone mapping. The ARTIS team is thus naturally attracted to this area.
We base our research on the following principles:
In all our target applications, it is crucial to control the level of approximation, for instance through reliable error bounds. Thus, all simplification techniques, either concerning geometry or lighting, require a precise mathematical analysis of the solution properties.
We seek to develop representations affording a controllable balance between these conflicting goals. In particular this applies to multi-resolution techniques, where an appropriate generic process is defined, which can then be applied to “well chosen” levels of the hierarchy. This aspect is of course key to an optimal adaptation to the chosen application context, both for lighting simulation and for geometric transformation and simplification.
Modeling geometric shapes, appearance data and various phenomena is the most tedious task in the creation process for virtual scenes. In many cases it can be beneficial to analyze real documents or scenes to recover relevant parameters. These parameters can then be used to model objects, their properties (light sources, reflectance data...) or even more abstract characteristics such as rendering styles. Thus this idea of parameter extraction is present in most of our activities.
In all our applications we try to keep in mind the role of the final user in order to offer intuitive controls over the result. Depending on the targeted goal we seek a good compromise between automation and manual design. Moreover we put the user into the research loop as much as possible via industrial contracts and collaboration with digital artists.
Although it has long been recognized that the visual channel is one of the most effective means for communicating information, the use of computer processing to generate effective visual content has been mostly limited to very specific image types: realistic rendering, computer-aided cell animation, etc.
The ever-increasing complexity of available 3D models is creating a demand for improved image creation techniques for general illustration purposes. Recent examples in the literature include computer systems to generate road maps, or assembly instructions, where a simplified visual representation is a necessity.
Our work in expressive rendering and in relevance-guided rendering aims at providing effective tools for all illustration needs that work from complex 3D models. We also plan to apply our knowledge of lighting simulation, together with expressive rendering techniques, to the difficult problem of sketching illustrations for architectural applications.
Video games represent a particularly challenging application domain, since they require both real-time interaction and high levels of visual quality. Moreover, video games are developed on a variety of platforms with completely different capacities. Automatic generation of appropriate data structures and runtime selection of optimal rendering algorithms can save companies a huge amount of development effort (e.g. the EAGL library used by Electronic Arts).
More generally, interactive visualization of complex data (e.g. in scientific engineering) can be achieved only by combining various rendering accelerations (e.g. visibility culling, levels of details, etc.), an optimization task that is hard to perform “by hand” and highly data dependent. One of ARTIS' goals is to understand this dependence and automate the optimization.
Virtual heritage is a recent area which has seen spectacular growth over the past few years. Archeology and heritage exhibits are natural application areas for virtual environments and computer graphics, since they provide the ability to navigate 3D models of environments that no longer exist and cannot be captured on video. Moreover, digital models and 3D renderings make it possible to enrich the navigation with annotations.
Our work on style has proved very interesting to architects, who have a long tradition of using hand-drawn schemas and wooden models to work and communicate. Wooden models can advantageously be replaced by 3D models inside a computer. Drawing, on the other hand, offers a higher level of interpretation and a richness of expression that architects really need, for example to emphasize that a given model is a hypothesis.
By investigating style analysis and expressive rendering, we could “sample” drawing styles used by architects and “apply” them to the rendering of 3D models. The computational power made available by computer assisted drawing can also lead to the development of new styles with a desired expressiveness, which would be harder to produce by hand. In particular, this approach offers the ability to navigate a 3D model while offering an expressive rendering style, raising fundamental questions on how to “animate” a style.
ARTIS insists on sharing the software that is developed for internal use. These packages are all listed in a dedicated section of the team web site.
libQGLViewer is a library that provides tools to efficiently create new 3D viewers. Simple and common actions such as moving the camera with the mouse, saving snapshots or selecting objects are not available in standard APIs, and libQGLViewer fills this gap. It merges into a unified and complete framework the tools that everyone used to develop individually. Creating a new 3D viewer now requires about 20 lines of copy-pasted code and five minutes. libQGLViewer has been distributed under the GPL licence since January 2003, and several hundred downloads are recorded each month.
PlantRad is a software program for computing solutions to the equation of light equilibrium in a complex scene including vegetation. The technology used is hierarchical radiosity with clustering and instantiation. Thanks to the latter, PlantRad is capable of treating scenes with a very high geometric complexity (up to millions of polygons) such as plants or any kind of vegetation scene where a high degree of approximate self-similarity permits a significant gain in memory requirements. Its main domains of applications are urban simulation, remote sensing simulation (See the collaboration with Noveltis, Toulouse) and plant growth simulation, as previously demonstrated during our collaboration with the LIAMA, Beijing.
In the context of the European project RealReflect, the ARTIS team has developed the HQR software, based on the photon mapping method, which is capable of solving the light balance equation and of giving a high quality solution. Through a graphical user interface, it reads X3D scenes using the X3DToolKit package developed at ARTIS, allows the user to tune several parameters, computes photon maps, and reconstructs information to obtain a high quality solution. HQR also accepts plugins, which considerably eases the development of new global illumination algorithms by letting them benefit from the existing algorithms for handling materials, geometry and light sources. HQR is freely available for download.
The MobiNet software allows for the creation of simple applications such as video games, virtual physics experiments or pedagogical math illustrations. It relies on an intuitive graphical interface and language which allows the user to program a set of mobile objects (possibly over a network). It is available in the public domain.
The main aim of MobiNet is to allow young students at high-school level, with no programming skills, to experiment with the notions they learn in math and physics by modeling and simulating simple practical problems, and even simple video games. This platform has been used massively during the Grenoble INP "engineer weeks" since 2002: 150 senior high-school pupils per year, each doing a 3-hour practical session. This work is partly funded by Grenoble INP. Various contacts are currently being developed in the educational world. Besides the "engineer weeks", several groups of PhD student "monitors" conduct experiments based on MobiNet with a high-school class as part of their teaching. Moreover, presentations are given in workshops and institutes, and a web site repository is maintained.
Basilic is a tool that automates the diffusion of research results on the web. It automatically generates the publication part of a project web site, creating index pages and the web pages associated with each publication. These pages provide access to the publication itself, its abstract, associated images and movies, and anything else via web links. All BibTeX-related information is stored in a database that is queried on the fly to generate the pages. Everyone can very easily and quickly update the site, thus guaranteeing an up-to-date web site. BibTeX and XML exports are available, and are for instance used to generate the bibliography of this activity report. Basilic is released under the GPL licence and is freely available for download.
This program provides parsers and utility functions for the BibTeX file format. The core of the library is written in C++ and compiled as a library. On top of it, bindings for different languages are provided using SWIG. The long-term goal is to replace the bibtex program and its associated BST language for style files by a more recent and powerful scripting language (such as Python, Ruby, PHP, Perl...) or by Java. The other goal is to allow easy writing of BibTeX-related tools, such as converters to other formats. XdkBibTeX is used by Basilic to import BibTeX files. XdkBibTeX is released under the GPL licence and is freely available for download.
Freestyle is a software tool for non-photorealistic line-drawing rendering from 3D scenes. It is designed as a programmable interface to allow maximum control over the style of the final drawing: the user "programs" how the silhouettes and other feature lines from the 3D model should be turned into stylized strokes, using a set of programmable operators dedicated to style description. This programmable approach, inspired by the shading languages available in photorealistic renderers such as Pixar's RenderMan, overcomes the limitations of integrated software with access to a limited number of parameters, and permits the design of an infinite variety of rich and complex styles. The system currently focuses on pure line drawing as a first step. The style description language is Python augmented with our set of operators. Freestyle was developed in the framework of a research project dedicated to the study of stylized line-drawing rendering from 3D scenes. This research has led to two publications.
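The operator-based style description can be illustrated with a toy pipeline (plain Python, not the actual Freestyle API): selection and stylization operators are composed to turn feature lines into stroke records.

```python
import math

def length(line):
    """Arc length of a polyline given as a list of 2D points."""
    return sum(math.dist(a, b) for a, b in zip(line, line[1:]))

def select(lines, predicate):
    """Selection operator: keep only the feature lines matching a predicate."""
    return [l for l in lines if predicate(l)]

def stylize(lines, thickness, color):
    """Style operator: turn each polyline into a stroke carrying style attributes."""
    return [{"points": l, "thickness": thickness, "color": color} for l in lines]

# Keep only silhouette chains longer than 0.5 units and render them
# as thick black strokes.
lines = [[(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)], [(0.0, 0.0), (0.1, 0.0)]]
strokes = stylize(select(lines, lambda l: length(l) > 0.5), 2.0, "black")
```

The real system offers many more operators (chaining, splitting, resampling, shading of stroke attributes), but the principle is the same: a style is a program that maps feature lines to strokes.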
In 2008, Freestyle got a new life, completely outside ARTIS and INRIA: it was the basis of one of the six Google Summer of Code projects awarded to the Blender Foundation.
We provide an implementation of the vector drawing tool described in the 2008 Diffusion Curves Siggraph paper. This prototype is composed of the Windows binary, along with the required shader programs (i.e., in source code). The software is available for download.
TiffIO is a plug-in that adds TIFF image read/write capabilities to all Qt3 and Qt4 applications using the reference QImage class. TiffIO comes with a self-test suite, and has been compiled and used successfully on a wide variety of system, compiler and Qt version combinations. A demo application makes it possible to quickly test image loading and viewing on any platform. All TIFF operations are based on libtiff 3.8.0; this plug-in is just a wrapper that enables its transparent use through the QImage class and the architecture defined by Qt.
TiffIO has been downloaded by a large number of developers, and integrated into a variety of commercial or internal tools, for instance at Pixar. TiffIO is freely available for download.
The VRender library is a simple tool to render the content of an OpenGL window to a vectorial device such as Postscript, XFig, and soon SVG. The main usage of such a library is to make clean vectorial drawings for publications, books, etc.
In practice, VRender replaces the z-buffer based hidden surface removal of OpenGL by sorting the geometric primitives so that they can be rendered in a back-to-front order, possibly cutting them into pieces to solve cycles.
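The core ordering step can be sketched as follows (a simplified version that sorts whole primitives by their farthest vertex and ignores the cycle-cutting that VRender performs):

```python
def painter_sort(primitives):
    """Painter's algorithm ordering: draw the farthest primitives first.
    Each primitive is a list of (x, y, z) vertices, with larger z meaning
    farther from the viewer.  Depth cycles, which VRender resolves by
    cutting primitives into pieces, are not handled here."""
    return sorted(primitives, key=lambda verts: -max(v[2] for v in verts))
```

Rendering the returned list front-to-back-reversed (i.e., in order) reproduces correct occlusion on a vector device that has no z-buffer.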
VRender is also responsible for the vectorial snapshot feature of the QGLViewer library. VRender is released under the LGPL licence and is freely available for download.
SciPres is a system for creating animated presentations. It was inspired by Slithy, a Python-based system developed by Douglas Zongker. In short, SciPres is to PowerPoint what LaTeX is to Microsoft Word: you script your presentation using a text editor rather than using a WYSIWYG system. SciPres is released under the GPL licence and is freely available for download.
The simplistic pinhole camera model used to teach perspective (and computer graphics) produces sharp images because every image element corresponds to a single ray in the scene. Real-life optical systems such as photographic lenses, however, must collect enough light to accommodate the sensitivity of the imaging system, and therefore combine light rays coming through a finite-sized aperture. Focusing mechanisms are needed to choose the distance of an “in-focus” plane, which will be sharply reproduced on the sensor, while objects appear increasingly blurry as their distance to this plane increases. The visual effect of focusing can be dramatic and is used extensively in photography and film, for instance to separate a subject from the background.
Although the simulation of depth of field in Computer Graphics has been possible for more than two decades, this effect is still rarely used in practice because of its high cost: the lens aperture must be densely sampled to produce a high-quality image. This is particularly frustrating because the defocus produced by the lens is not increasing the visual complexity, but rather removing detail! In this paper, we propose to exploit the blurriness of out-of-focus regions to reduce the computation load. We study defocus from a signal processing perspective and propose a new algorithm that estimates local image bandwidth. This allows us to reduce computation costs in two ways, by adapting the sampling rate over both the image and lens aperture domain.
In image space, we exploit the blurriness of out-of-focus regions by downsampling them: we compute the final image color for only a subset of the pixels and interpolate. Our motivation for adaptive sampling over the lens comes from the observation that in-focus regions do not require a large number of lens samples because they do not get blurred, in contrast to out-of-focus regions where the large variations of radiance through the lens require many samples. More formally, we derive a formula for the variance over the lens and use it to adapt sampling for a Monte-Carlo integrator. Both image and lens sampling are derived from a Fourier analysis of depth of field that extends recent work on light transport. In particular, we show how image and lens sampling correspond to the spatial and angular bandwidth of the lightfield.
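As a toy illustration of the idea (not the paper's actual variance estimator), one can derive a per-pixel lens sampling rate from the thin-lens circle of confusion, which grows with defocus and hence with radiance variance over the aperture. The constant `k` below is an arbitrary tuning parameter introduced for the sketch:

```python
def circle_of_confusion(depth, focus, aperture, focal):
    """Thin-lens circle-of-confusion diameter for a point at `depth` when a
    lens (aperture diameter `aperture`, focal length `focal`) is focused at
    `focus`.  Assumes depth and focus are both larger than focal."""
    image_focus = 1.0 / (1.0 / focal - 1.0 / focus)   # 1/f = 1/o + 1/i
    image_depth = 1.0 / (1.0 / focal - 1.0 / depth)
    return abs(aperture * (image_depth - image_focus) / image_depth)

def lens_samples(depth, focus, aperture, focal, base=4, k=1e7):
    """Heuristic lens sampling rate: defocused pixels average high-variance
    radiance over the aperture and need many samples, while in-focus pixels
    need only a few.  `k` is a made-up tuning constant, not the paper's."""
    c = circle_of_confusion(depth, focus, aperture, focal)
    return base + int(k * c * c)
```

In-focus pixels thus fall back to the base sampling rate, and the budget is concentrated where defocus blur is strong.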
Figure shows an example of applying our technique to a scene with high depth of field variations. As predicted, the spatial sampling density is high in the regions with high specularity or depth discontinuities, and the angular sampling density is high where un-focused pixels are the result of averaging high variance regions of the incoming illumination. Spatial samples therefore stick to regions with high spatial frequencies.
This paper was published in the journal ACM Transactions on Graphics and presented at the Siggraph'2009 conference.
Motion blur is crucial for high-quality rendering but is also very expensive. Our first contribution is a frequency analysis of motion-blurred scenes, including moving objects, specular reflections, and shadows. We show that motion induces a shear in the frequency domain, and that the spectrum of moving scenes is usually contained in a wedge. This allows us to compute adaptive space-time sampling rates, to accelerate rendering. For uniform velocities and standard axis-aligned reconstruction, we show that the product of spatial and temporal bandlimits or sampling rates is constant, independent of velocity. Our second contribution is a novel sheared reconstruction filter that tightly packs the wedge of frequencies in the Fourier domain, and enables even lower sampling rates (see Figure ). We present a rendering algorithm that computes a sheared reconstruction filter per pixel, without any intermediate Fourier representation. This often permits synthesis of motion-blurred images with far fewer rendering samples than standard techniques require (see Figure ). This work was published at the Siggraph 2009 conference, and in the journal ACM Transactions on Graphics .
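The shear itself can be checked numerically with a small pure-Python discrete Fourier transform (a toy sketch, not the paper's implementation): the spectrum of a uniformly translating 1D pattern concentrates on the line omega_t = -v * omega_x.

```python
import cmath, math

def dft2(grid):
    """Naive 2D DFT of an n x n real grid (rows indexed by time, columns by space)."""
    n = len(grid)
    out = [[0j] * n for _ in range(n)]
    for wt in range(n):
        for wx in range(n):
            s = 0j
            for t in range(n):
                for x in range(n):
                    s += grid[t][x] * cmath.exp(-2j * math.pi * (wt * t + wx * x) / n)
            out[wt][wx] = s
    return out

n, k, v = 8, 2, 1  # spatial frequency k, uniform velocity v (pixels per frame)
moving = [[math.cos(2.0 * math.pi * k * (x - v * t) / n) for x in range(n)]
          for t in range(n)]
spec = dft2(moving)
# The energy of the moving pattern lies on the sheared line
# omega_t = -v * omega_x (mod n): the wedge described above.
peak = max(((wt, wx) for wt in range(n) for wx in range(n)),
           key=lambda p: abs(spec[p[0]][p[1]]))
```

For a mix of velocities, each region contributes such a line, and together they fill the wedge that bounds the adaptive space-time sampling rates.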
Light scattering in refractive media is an important optical phenomenon for computer graphics. While recent research has focused on multiple scattering, there has been less work on accurate solutions for single or low-order scattering. Refraction through a complex boundary allows a single external source to be visible in multiple directions internally with different strengths; these are hard to find with existing techniques. This paper presents techniques to quickly find paths that connect points inside and outside a medium while obeying the laws of refraction. We introduce: a half-vector based formulation to support the most common geometric representation, triangles with interpolated normals; hierarchical pruning to scale to triangular meshes; and, both a solver with strong accuracy guarantees, and a faster method that is empirically accurate. A GPU version achieves interactive frame rates in several examples. See Figure for our results, and Figure for comparison with other rendering methods.
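The path-finding problem can be illustrated in a minimal 2D setting (a sketch under strong assumptions: a single flat interface, which avoids the multiple-solution difficulty that the paper's half-vector formulation and hierarchical pruning address). By Fermat's principle, the refraction point minimises the optical path length, and the recovered point satisfies Snell's law:

```python
import math

def refract_point(source, target, n1, n2, iters=200):
    """Find where a light path from `source` (above, y > 0) to `target`
    (below, y < 0) crosses the flat interface y = 0: by Fermat's principle
    the crossing minimises the optical path length n1*d1 + n2*d2, found
    here by ternary search (the objective is convex in x)."""
    sx, sy = source
    tx, ty = target
    def opl(x):
        return n1 * math.hypot(x - sx, sy) + n2 * math.hypot(tx - x, ty)
    lo, hi = min(sx, tx), max(sx, tx)
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if opl(m1) < opl(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0
```

With a complex triangulated boundary the objective is no longer convex and several crossing points may exist, which is precisely what the techniques of the paper are designed to enumerate efficiently.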
In the context of the GENAC2 project, we have designed an algorithm for computing indirect illumination in screen space in the context of video games. In such a context, the most important criteria are the stability of the cost over time, the speed, and lack of noise and artifacts, whereas the mathematical accuracy of the computation is not very important.
We thus designed a screen-space hierarchical algorithm for computing indirect lighting for animated scenes. Our algorithm is fully compatible with deferred-shading rendering engines for video games, and computes indirect lighting in less than 10 ms, leaving enough computation time for other gaming tasks, such as interaction and animation (See Figure ). Our algorithm works in two steps: first, we compute indirect illumination in screen-space at all possible scales, then we filter and combine together the illumination received at the different scales. Particular care was taken in verifying the practical integration of this technique inside a commercial video-game rendering engine, provided by our project partner in the GENAC2 project.
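The two-step structure (multi-scale gather, then combination) can be sketched in one dimension; this toy code stands in for the screen-space GPU passes and uses hypothetical per-scale weights:

```python
def downsample(vals):
    """Halve the resolution by averaging pairs of neighbouring samples."""
    return [(vals[i] + vals[min(i + 1, len(vals) - 1)]) / 2.0
            for i in range(0, len(vals), 2)]

def upsample(vals, size):
    """Bring a coarse level back to full resolution (nearest-neighbour)."""
    return [vals[min(i // 2, len(vals) - 1)] for i in range(size)]

def multiscale_indirect(direct, weights):
    """Gather indirect light at several 'screen-space' scales: each coarser
    level spreads the direct lighting over a wider neighbourhood, and the
    upsampled levels are blended with per-scale weights."""
    size, levels = len(direct), [list(direct)]
    while len(levels[-1]) > 1:
        levels.append(downsample(levels[-1]))
    out = [0.0] * size
    for w, lvl in zip(weights, levels[1:]):
        out = [o + w * u for o, u in zip(out, upsample(lvl, size))]
    return out
```

The cost of this scheme depends only on the image resolution, not on scene complexity, which is what makes the per-frame time stable.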
A paper describing this technique was submitted to the Interactive 3D Graphics 2010 conference (Evaluation still ongoing).
Cyril Crassin pursues his PhD thesis with Fabrice Neyret on the real-time rendering of very large and detailed volumes, taking advantage of GPU-adapted data structures and algorithms. The main targets are cases where detail is concentrated at the interface between free space and clusters of density, as found in many natural volume data such as cloudy skies or vegetation, or in data represented as generalized parallax maps, hypertextures or volumetric textures.
The new method is based on a dynamic N3-tree storing MIP-mapped 3D texture bricks in its leaves. We load onto the GPU, on the fly, only the necessary bricks at the necessary resolution, taking visibility into account. This keeps memory consumption low during interactive exploration and minimizes data transfer (see Figure ). Our ray-marching algorithm benefits from the multiresolution aspect of our data structure and provides real-time performance.
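The lazy-loading behaviour of the structure can be sketched as follows (a simplified CPU-side stand-in: `make_brick` plays the role of the GPU upload, and visibility-driven requests are reduced to point queries):

```python
class N3Tree:
    """Sparse N^3-tree whose leaves reference MIP-mapped voxel bricks.
    Bricks are produced on demand (`make_brick` stands in for the GPU
    upload) and cached, so only the visited part of the volume is resident."""
    def __init__(self, n, depth, make_brick):
        self.n, self.depth, self.make_brick = n, depth, make_brick
        self.cache = {}  # (level, ix, iy, iz) -> brick payload

    def brick_at(self, x, y, z, level):
        """Return the brick covering the point (x, y, z) in [0, 1)^3 at
        `level` (0 = coarsest), loading it into the cache on first use."""
        cells = self.n ** min(level, self.depth)
        key = (level, int(x * cells), int(y * cells), int(z * cells))
        if key not in self.cache:
            self.cache[key] = self.make_brick(key)  # lazy "upload"
        return self.cache[key]
```

During ray marching, each sample requests the brick at the resolution matching its cone footprint, so distant or occluded regions never trigger an upload.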
A paper has been published at the ACM Symposium on Interactive 3D Graphics and Games 2009 (also presented as a poster and a sketch at Siggraph'09), and a version has been accepted as a book chapter in GPU Pro (ShaderX 8). We have also been contacted by several game, visualization and special-effects companies interested in the technology, and we have been invited to give technical presentations at the Intel “Visual Computing and Research Conference” (Saarbrücken, December 2009) and at the “Crytek Academy Conference” (Frankfurt, November 2009). We are also in close contact with NVIDIA, where Cyril did a long stay in the context of the Eurodoc funding program.
Many recent games and applications target the interactive exploration of realistic large scale worlds. These worlds consist mostly of static terrain models, as the simulation of animated fluids in these virtual worlds is computationally expensive. Adding flowing fluids, such as rivers, to these virtual worlds would greatly enhance their realism, but causes specific issues: as the user is usually observing the world at close range, small scale details such as waves and ripples are important.
However, the large scale of the world makes classical methods impractical for simulating these effects. We developed an algorithm for the interactive simulation of realistic flowing fluids in large virtual worlds. Our method relies on two key contributions: the local computation of the velocity field of a steady flow given boundary conditions, and the advection of small-scale details on the fluid, following the velocity field and uniformly sampled in screen space. This is illustrated in Figure .
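The advection half of the method can be sketched as follows; the toy channel-flow velocity field and the respawn rule are illustrative assumptions, not the boundary-condition solver of the paper.

```python
# Illustrative sketch: advect detail particles (e.g. wave sprites) along a
# steady velocity field. The field below is a toy channel flow, zero at the
# banks y=0 and y=1 and fastest mid-channel.
def velocity(x, y):
    return (4.0 * y * (1.0 - y), 0.0)

def advect(particles, dt, domain_length=10.0):
    out = []
    for (x, y) in particles:
        vx, vy = velocity(x, y)
        x, y = x + vx * dt, y + vy * dt
        if x > domain_length:        # respawn upstream to keep screen coverage
            x -= domain_length
        out.append((x, y))
    return out
```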
This work, done in collaboration with the Evasion project-team in the scope of the PhD thesis of Qizhi Yu (supervised by Fabrice Neyret and Eric Bruneton), was accepted at the Eurographics 2009 conference and published in the Computer Graphics Forum journal .
With this work we proposed a real-time method to animate complex scenes of thousands of trees under a user-controllable wind load. First, modal analysis is applied to extract the main modes of deformation from the mechanical model of a 3D tree. The novelty of our contribution is to precompute a new basis of the modal stress of the tree under wind load. At runtime, this basis allows us to replace the modal projection of the external forces by a direct mapping for any directional wind. We showed that this approach can be implemented efficiently on graphics hardware. This modal animation can be simulated at low computational cost even for large scenes containing thousands of trees (Figure ).
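The runtime part of this idea, modal superposition under a steady wind load, can be sketched as follows; the mode shapes, frequencies and the static response formula are textbook modal analysis, simplified here to scalar per-vertex offsets rather than the paper's precomputed stress basis.

```python
# Minimal modal-superposition sketch: each mode has a shape vector phi and a
# frequency omega; a steady force F maps to modal amplitudes
# q_i = (phi_i . F) / omega_i**2, and the deformed positions are
# rest + sum_i q_i * phi_i.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def deform(rest, modes, wind_force):
    # modes: list of (phi, omega); phi holds one 1-D offset per vertex here.
    disp = [0.0] * len(rest)
    for phi, omega in modes:
        q = dot(phi, wind_force) / omega ** 2   # static modal response
        for i in range(len(rest)):
            disp[i] += q * phi[i]
    return [r + d for r, d in zip(rest, disp)]
```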
This work was done in collaboration with the Evasion project-team and the LadHyX laboratory.
In collaboration with Eric Bruneton of the Evasion project-team, we developed a new algorithm for the modelling, animation, illumination and rendering of the ocean, in real time, at all scales and for all viewing distances. Our algorithm is based on a hierarchical representation combining geometry, normals and BRDF. For each viewing distance, we compute a simplified version of the geometry and encode the missing details into the normals and the BRDF, depending on the level of detail required. We then use this hierarchical representation for illumination and rendering. Our algorithm runs in real time and produces highly realistic pictures and animations (see Figure ). This work has been accepted for publication at the next Eurographics conference (Eurographics 2010).
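The core idea, shifting wave detail between geometry, normals and BRDF as the viewing distance grows, can be illustrated with a toy classifier; the thresholds and the screen-footprint estimate are illustrative assumptions, not the paper's criteria.

```python
# Sketch: decide which level of the hierarchy should carry a wave of a given
# wavelength, based on its (crudely estimated) projected size on screen.
def wave_representation(wavelength, distance, pixel_size=0.001):
    projected = wavelength / max(distance, 1e-6)   # crude screen footprint
    if projected > 20 * pixel_size:
        return "geometry"      # wave large on screen: displace the mesh
    if projected > pixel_size:
        return "normal map"    # sub-mesh but supra-pixel: bend the normals
    return "BRDF"              # sub-pixel: fold into micro-facet roughness
```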
Inspired by the work of Huang et al. on empirical mode decomposition of 1D signals, we have designed an equivalent method for decomposing images into layers of details. Our method has a unique advantage over existing layer-decomposition algorithms: the notion of detail we use does not depend on contrast but rather on image-space frequency. Moreover, our decomposition preserves edges, even when their amplitude is lower than the amplitude of surrounding details.
To achieve this, we base our decomposition algorithm on the extraction of an upper and a lower envelope image, which respectively interpolate local maxima and minima. Each interpolation is made edge-preserving by examining the local variance between maxima (resp. minima). We then average the two envelopes to obtain an edge-preserving smoothing of the input image. The detail layer is extracted by subtracting the smoothed image from the input image. The algorithm can be applied hierarchically by feeding the smoothed layer to the decomposition algorithm again (Figure ).
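The envelope-based sifting step can be sketched in 1-D; this toy version uses plain linear interpolation between extrema instead of the edge-preserving, variance-based interpolation described above.

```python
# 1-D sketch of one decomposition step: build upper/lower envelopes from the
# local extrema, average them into a smooth layer, subtract to get the detail.
def extrema(sig, cmp):
    idx = [0] + [i for i in range(1, len(sig) - 1)
                 if cmp(sig[i], sig[i - 1]) and cmp(sig[i], sig[i + 1])]
    return idx + [len(sig) - 1]

def envelope(sig, idx):
    env = []
    for i in range(len(sig)):
        j = max(k for k in idx if k <= i)       # previous extremum
        k = min(m for m in idx if m >= i)       # next extremum
        t = 0.0 if j == k else (i - j) / (k - j)
        env.append(sig[j] * (1 - t) + sig[k] * t)
    return env

def decompose(sig):
    upper = envelope(sig, extrema(sig, lambda a, b: a >= b))
    lower = envelope(sig, extrema(sig, lambda a, b: a <= b))
    smooth = [(u + l) / 2.0 for u, l in zip(upper, lower)]
    detail = [s - m for s, m in zip(sig, smooth)]
    return smooth, detail
```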
Our decomposition method enables a considerable number of interesting applications, including multi-scale contrast enhancement, noise and detail removal, tone mapping and hatch-to-tone filtering. This method has been published at Siggraph Asia 2009 .
In the context of an internship at MIT, Adrien Bousseau worked with Frédo Durand and Sylvain Paris on the decomposition of a photograph into the product of an illumination component that represents lighting effects and a reflectance component that is the color of the observed material. This is an under-constrained problem, and automatic methods are challenged by complex natural images.
In this work, we describe a new approach that enables users to guide an optimization with simple indications such as regions of constant reflectance or illumination. Based on a simple assumption on local reflectance distributions, we derive a new propagation energy that enables a closed form solution using linear least-squares. We achieve fast performance by introducing a novel downsampling that preserves local color distributions. We demonstrate intrinsic image decomposition on a variety of images and show applications.
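A minimal 1-D sketch of stroke-guided propagation follows, assuming a simple smoothness term and soft user constraints solved by Gauss-Seidel iteration; the actual energy also weights smoothness by local color distributions and is solved in closed form by linear least squares.

```python
# Toy solver: find a per-pixel (log-)shading value s_i that is smooth and
# pinned where the user marked a known illumination value. `image` is only
# used for its length here; the real method derives weights from it.
def propagate(image, constraints, smooth_w=1.0, data_w=10.0, iters=500):
    n = len(image)
    s = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            num, den = 0.0, 0.0
            for j in (i - 1, i + 1):          # smoothness with neighbors
                if 0 <= j < n:
                    num += smooth_w * s[j]
                    den += smooth_w
            if i in constraints:              # soft user-marked value
                num += data_w * constraints[i]
                den += data_w
            s[i] = num / den
    return s
```

With an illumination value pinned at each end, the solution relaxes to a smooth ramp between the two user indications.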
In this research , we developed a new method for the analysis and resynthesis of a specific class of textures that we refer to as “high-level stochastic textures”. Such textures consist of distributions of different and potentially overlapping shapes, or “patterns”. The constituent distributions may be random or they may obey some geometric placement rule. As a result, they fall into the class of high-level stochastic textures, which comprises both arrangements of 2D primitives and near-regular textures.
Our proposed method first aims at extracting and capturing the distribution of relevant shapes. To achieve this, we rely on their repetition throughout the input sample. Once this analysis step is performed, it then becomes possible to resynthesize visually similar and tileable textures by reusing the obtained patterns. This is in complete contrast to classic pixel-based Markovian approaches, where such long-range structure preserving synthesis is hard if not impossible.
Qizhi Yu et al. have developed a Lagrangian model of texture advection, to be used for advecting small water-surface details while preserving their spectral characteristics (see Figure ). Our particles are distributed according to an animated Poisson-disk distribution, and carry a local grid mesh which is distorted by advection and regenerated when a distortion metric is exceeded. This Lagrangian approach solves the problem of a locally adaptive regeneration rate, provides a better spectrum and a better motion illusion, and avoids the burden of blending several layers.
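The regeneration criterion can be sketched as follows; the frame-anisotropy measure and the threshold are illustrative stand-ins for the paper's distortion metric.

```python
import math

# Each particle carries two local-frame vectors that get distorted by the
# flow's Jacobian; when the frame is too distorted, its grid is rebuilt.
def distortion(u, v):
    # Ratio of longest to shortest frame axis (1.0 = undistorted); a crude
    # stand-in for the actual metric.
    lu, lv = math.hypot(*u), math.hypot(*v)
    return max(lu, lv) / max(min(lu, lv), 1e-9)

def step_particle(frame, jacobian, threshold=2.0):
    (a, b), (c, d) = jacobian
    u, v = frame
    u2 = (a * u[0] + b * u[1], c * u[0] + d * u[1])
    v2 = (a * v[0] + b * v[1], c * v[0] + d * v[1])
    if distortion(u2, v2) > threshold:
        return ((1.0, 0.0), (0.0, 1.0)), True   # regenerate a fresh grid
    return (u2, v2), False
```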
Many non-photorealistic rendering approaches aim at depicting 3D scenes with styles that are traditionally produced on 2D media like paper. The main difficulty faced by these methods is temporal coherence when stylizing dynamic scenes. This problem arises from the conflicting goals of depicting a 3D motion while preserving the 2D characteristics inherent to any style marks (pigments, strokes, etc.). Achieving these goals without introducing visual artifacts implies the concurrent fulfillment of three constraints. First, the style marks should have a constant size and density in the image in order to preserve the 2D appearance of the medium. Second, the style marks should follow the motion of the 3D objects they depict to avoid the sliding of the style features over the 3D scene. Finally, sufficient temporal continuity between adjacent frames is required to avoid popping and flickering.
We propose dynamic textures, a method that facilitates the integration of temporally coherent stylization into real-time rendering pipelines. This method uses textures as simple data structures to represent style marks. This makes our approach especially well suited to media with characteristic textures (e.g. watercolor, charcoal, stippling), while ensuring real-time performance thanks to the optimized texture management of modern graphics cards. Central to our technique is an object-space infinite zoom mechanism that guarantees a quasi-constant size and density of the texture elements in screen space for any distance from the camera. This simple mechanism preserves most of the 2D appearance of the medium supported by the texture while maintaining strong temporal coherence during animation.
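The infinite-zoom mechanism can be sketched by blending a mark texture sampled at two successive power-of-two scales chosen from the camera distance; the procedural `texture` below is a toy stand-in for an actual medium texture, not the method's implementation.

```python
import math

def texture(u, v):
    # Toy procedural mark texture in [0, 1].
    return 0.5 + 0.5 * math.sin(2 * math.pi * u) * math.sin(2 * math.pi * v)

def infinite_zoom(u, v, distance):
    # Pick the two octaves bracketing the current distance and blend them,
    # so marks keep a near-constant screen-space size at any zoom level.
    level = math.log2(max(distance, 1e-6))
    lo = math.floor(level)
    t = level - lo                     # blend weight between the two octaves
    s0, s1 = 2.0 ** lo, 2.0 ** (lo + 1)
    return (1 - t) * texture(u / s0, v / s0) + t * texture(u / s1, v / s1)
```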
This work received the best paper award at the 21st AFIG conference and has been presented at the ACM Symposium I3D 2009 .
We present a technique for the analysis and re-synthesis of 2D arrangements of stroke-based vector elements. A posteriori analysis of a user's inputs as a way to capture his or her intent poses a formidable challenge. By-example approaches could nevertheless become some of the most intuitive usage metaphors and greatly reduce the effort of the creation process. Here, we propose to tackle this issue from a statistical point of view, taking specific care to account for information usually overlooked in previous research, namely the elements' very appearance. We describe elements, composed of curve-like strokes, by a concise set of perceptually relevant features. After detecting dominant appearance traits, we can generate new arrangements that respect the captured appearance-related spatial statistics using multitype point processes. Our method faithfully reproduces visually similar arrangements and relies on neither heuristics nor post-processing to ensure statistical correctness. This work has been published at NPAR 2009 .
Texture fractalization is used in many existing approaches to ensure the temporal coherence of a stylized animation. This work presents the results of a psychophysical user study evaluating the relative distortion induced by the fractalization of typical medium textures. We perform a ranking experiment, assess the agreement among the participants and study the criteria they used. Finally, we show that the average co-occurrence error is an efficient quality predictor in this context.
This work has been published in APGV 09 (symposium on Applied perception in graphics and visualization) .
We present a 2D vector image representation with native support for textures, complete with tools for creating and manipulating these textures. Particularly, we describe methods for applying the textures directly to an image, without requiring full 3D information, a process we call texture-draping. Since the image representation is vectorial, it is compact and remains editable with convenient, high-level editing tools, designed to support common artistic workflows. While we focus on regular and near-regular textures, our representation can be extended to handle other types of textures. We illustrate our approach with several textured vector images, created by an artist using our system.
This work has been presented at EGSR 2009 (Eurographics Symposium on Rendering) and is part of a collaboration with Adobe.
Both in real life and in virtual environments, the control of gaze and balance strongly depends on the processing of visual cues. We started a new research subject using objective human balance measurements (force/torque applied by the feet, muscular activations, postural motion capture) to assess the impact of projected visual images on an immersed subject. For non-photorealistic rendering, such experimental setups should allow objective measurements of the effectiveness of different rendering styles, and might be useful to evaluate different policies for levels of abstraction.
In 2009, we focused on another type of application, in the medical domain: human balance also strongly depends on two other sensory signals, vestibular and muscular proprioception (inner ears and musculoskeletal feedback).
We published a first study , , that assessed the specific effects of dynamic 2D and 3D visual inputs on oculomotor and balance reactive control. Thirteen subjects were immersed in a virtual environment using different 2D/3D visual flow conditions. Analysis of eye movements and postural adjustments shows that 2D and 3D flows induce specific, measurable behavioral responses.
These results will allow us to go further in medical applications, and to conceive an immersive virtual visual environment for the diagnosis and treatment of balance troubles. To this end, we obtained in 2009 a clinical research grant from the DIRC (Direction de la recherche clinique et de l'innovation du CHU Grenoble Nord) to build up a clinical experimental setup in order to validate the concept.
We have started a cooperative research project with two video game companies in Lyon, Eden Games and WideScreen Games, in cooperation with the EVASION research team of INRIA Rhône-Alpes and the LIRIS research laboratory in Lyon. This cooperation is funded by the French “Fonds de Compétitivité des Entreprises”, the “Pôle de Compétitivité” Imaginove in Lyon, the Région Rhône-Alpes, the city of Lyon and the Grand Lyon urban area. The research theme for ARTIS is real-time global illumination for video games. This project started in September 2007, for 24 months. The final presentation took place on December 8, 2009.
The GARDEN project is a cooperative research project with the video game company Eden Games in Lyon. This cooperation is funded by the French “Fonds de Compétitivité des Entreprises”, the “Pôle de Compétitivité” Imaginove in Lyon, the Région Rhône-Alpes, the city of Lyon and the Grand Lyon urban area. The research themes for ARTIS are the real-time rendering of complex materials, vegetation and human bodies for video games. This project started in March 2009, for 24 months.
We have started a cooperation with David Salesin of the Adobe Research Center in Seattle. This cooperation has resulted in extended stays of Alexandrina Orzan and Adrien Bousseau in Seattle in 2008, and two publications. In 2009, we continued this cooperation, with two new publications accepted, including one at the Siggraph Asia 2009 conference , .
We are funded by the ANR research program “Blanc” (research in generic directions) for joint research with Jean Ponce (École Normale Supérieure) and Adrien Bartoli (Université Blaise Pascal — Clermont II) on the acquisition, modelling and rendering of image-based objects, with a focus on high-quality and interactive rendering. This grant started in September 2007, for 36 months.
We are funded by the MDCO (Large Datasets and Knowledge) research program of the ANR, for a joint research project with the LIRIS research laboratory (Lyon) and the LSIIT research laboratory (Strasbourg), on acquisition, rendering and relighting of real objects for their inclusion in virtual scenes. This grant started in September 2007, for 36 months.
We are funded by the ANR research program RIAM (grants for multimedia projects) for a joint industrial project with two production studios, Neomis Animation and BeeLight, two other INRIA project-teams, Bipop and Evasion, and a CNRS lab (Institut Jean Le Rond d'Alembert de l'Université Pierre et Marie Curie). The goal of this project is to provide hair rendering and animation tools for movie making. The grant started in September 2007, for 36 months.
We are funded by the ANR research program “jeune chercheur” (grants for young research leaders) for a joint research project with the IPARLA INRIA project-team in Bordeaux. The goal is to develop expressive rendering models for 2D and 3D animations. The grant started in September 2007, for 36 months.
LIMA (Loisirs et Images Numériques) is a project of the Cluster ISLE (Informatique, Signal et Logiciel Embarqué). The ARTIS team is part of the LIMA project, and cooperates with the other teams of the project on digital images.
INRIA's office of international relations set up in 2001 a program of associated teams that bring together an INRIA project-team and a foreign research team, providing a form of institutional recognition to stable scientific collaborations involving several researchers on each side.
An associated team was created for the 2003-2005 period between ARTIS and the MIT graphics group (CSAIL Lab) on the subject of Flexible Rendering. It has been extended for 2006, 2007 and 2008, then extended one last time in 2009 (with no budget). This association, now in its seventh year, has been extremely positive: several research actions (described above in the results sections) have been performed jointly with MIT researchers.
The associated team has helped this collaboration on a practical level by providing funding for researcher exchanges. The two teams know each other very well and frequent visits and conference calls make actual cooperation a reality (for instance publications , , , , , , , are co-signed by researchers from the two groups).
The activity of the associate team in 2009 focused on the following three items:
We continued our research project on the frequency analysis of light transport, with an emphasis on the development of practical algorithms for the evaluation of frequency spectra in photon tracing. This work has resulted in two papers accepted for publication in ACM Transactions on Graphics , .
Adrien Bousseau conducted a long-term stay at MIT (3 months), working with Frédo Durand on computational photography. They are working on the separation of a photograph into two components: lighting and color. This work offers many potential applications for image editing software, and we are cooperating with the Adobe research center in Boston, especially with Sylvain Paris, a former ARTIS PhD student. This work has resulted in a joint publication at Siggraph Asia 2009 .
We have started a new research direction on edge-preserving multiscale decomposition. Our algorithm takes a picture as input and produces a multiscale representation of it, separating fine, medium and large-scale details. The algorithm works by detecting oscillations in the picture. Its main advantage is that it is edge-preserving: even the large-scale version does not blur the edges. This work resulted in a joint publication at Siggraph Asia 2009 .
In 2009, we have started a new Associate Team with the Program of Computer Graphics at Cornell University, on the subject of new challenges in Photorealistic rendering.
The Region Rhône-Alpes has established a grant program to help PhD students in starting actual international cooperation during their PhD years, with support for a six month stay in a lab in a foreign country.
Cyril Crassin obtained funding for a six-month stay at nVidia R&D, UK, to work on the adaptation of data structures and algorithms to high-performance GPU computing (a field in which nVidia is the world-leading manufacturer).
Visual communication greatly benefits from the large variety of appearances that an image can take. By neglecting spurious details, simplified images focus the attention of an observer on the essential message to transmit. Stylized images, that depart from reality, can suggest subjective or imaginary information. More subtle variations, such as change of lighting in a photograph can also have a dramatic effect on the interpretation of the transmitted message.
The goal of this thesis is to allow users to manipulate visual content and create images that correspond to their communication intent. We propose a number of manipulations that modify, simplify or stylize images in order to improve their expressive power.
We first present two methods to remove details in photographs and videos. The resulting simplification enhances the relevant structures of an image. We then introduce a novel vector primitive, called Diffusion Curves, that facilitates the creation of smooth color gradients and blur in vector graphics. The images created with diffusion curves contain complex image features that are hard to obtain with existing vector primitives. In the second part of this manuscript we propose two algorithms for the creation of stylized animations from 3D scenes and videos. The two methods produce animations with the 2D appearance of traditional media such as watercolor. Finally, we describe an approach to decompose the illumination and reflectance components of a photograph. We make this ill-posed problem tractable by propagating sparse user indications. This decomposition allows users to modify lighting or material in the depicted scene.
The various image manipulations proposed in this dissertation facilitate the creation of a variety of visual representations, as illustrated by our results.
This thesis proposes a novel image primitive: the diffusion curve. This primitive relies on the principle that images can be defined via their discontinuities, and concentrates image features along contours. The diffusion curve can be defined in vector graphics as well as in raster graphics, to increase user control during the process of art creation. The vectorial diffusion curve primitive augments the expressive power of vector images by capturing complex spatial appearance behaviors. Diffusion curves represent a simple and easy-to-manipulate support for complex content representation and editing. In raster images, diffusion curves define a higher-level structural organization of the pixel image. This structure is used to create simplified or exaggerated representations of photographs in a way consistent with the original image content. Finally, a fully automatic vectorization method is presented, which converts raster diffusion curves to vector diffusion curves.
This thesis deals with real-time image synthesis, and especially the efficient rendering of visual detail, which is a source of realism in synthetic images. To handle the complexity of visual detail, one needs specific representations adapted to both the represented data (e.g. geometry, materials) and the hardware capabilities. The first contribution of this thesis is a technique for representing geometric detail using relief textures. Two algorithms have been derived to render such height-field data on the GPU: the first one accepts dynamic input, while the second one produces even better results on static scenes, at the expense of some pre-computation. These techniques are also extended to represent water, and the interaction of light with water, in real time. A second contribution of this thesis is to derive adapted representations of far-away objects, to allow rendering the appearance of an object from its light-field data without dealing with complex geometry. This is for instance applied to models of trees. The thesis was defended on November 12, 2009.
The proper diffusion of scientific results is an important part of their value. Since most of this diffusion is done using the web, a new bibliography server has been developed to ease this diffusion.
Most of the members of the team (faculty members as well as PhD students) give courses. This educational effort is also present in the distribution of libraries such as libQGLViewer, which have a real pedagogical interest since they simplify and explain the creation of computer graphics images. The project is also involved in the animation of the “Fête de la Science” (a science popularization event), and is often consulted for its scientific expertise.
Nicolas Holzschuch is:
Member of the program committee of EGSR 2009 and EG 2010,
Member of the program committee of the Eurographics Symposium on Rendering 2010,
Member of the “Commission d'evaluation” of INRIA.
Joëlle Thollot is:
Member of the program committee of NPAR 2009,
Member of the “Comité de sélection” of Strasbourg university,