ARTIS is both an INRIA project-team and a subset of the LJK (UMR 5224), a joint research lab of CNRS, Université Joseph Fourier Grenoble-I (UJF), Université Pierre Mendès France Grenoble II (UPMF) and Institut National Polytechnique de Grenoble (INPG).

ARTIS was created in January, 2003, based on the observation that current image synthesis methods appear to provide limited solutions for the variety of current applications. The classical approach to image synthesis consists of separately modeling a 3D geometry and a set of photometric properties (reflectance, lighting conditions), and then computing their interaction to produce a picture. This approach severely limits the ability to adapt to particular constraints or freedoms allowed in each application (such as precision, real-time, interactivity, uncertainty about input data...). Furthermore, it restricts the classes of possible images and does not easily lend itself to new uses such as illustration, where a form of hierarchy of image constituents must be constructed.

One of the goals of the project is the definition of a more generic framework for the creation of synthetic images, integrating elements of 3D geometry, of 2D geometry (built from 3D geometry), of appearance (photometry, textures...), of rendering style, and of importance or relevance for a given application. The ARTIS project-team therefore deals with multiple aspects of image synthesis: model creation from various sources of data, transformations between these models, rendering and imaging algorithms, and the adaptation of the models and algorithms to various constraints or application contexts. The main research directions in ARTIS address:

Analysis and simulation of lighting effects. Development of hierarchical simulation techniques integrating the most general and realistic effects, fast rendering, inverse lighting, relighting, data acquisition based on lighting analysis.

Expressive (“non-photorealistic”) rendering. Definition and identification of rendering styles. Style extraction from existing documents. Development of new view models (mixture of 3D and 2D) and new rendering techniques.

Model simplification and transformation. Simplification of geometry and appearance, image-based representations, model transformation for various applications, detail creation and creation of virtual models from real data.

Our target applications deal with 3D image synthesis, radiative transfer simulation, visualization, virtual and augmented reality, and illustration. As application domains, we work on video games, animation movies, technical illustration, virtual heritage, lighting design, rehabilitation after trauma...

ARTIS had the following highlights for the year 2007:

Two publications were accepted at the conference SIGGRAPH 2007, the leading conference in Computer Graphics. One of them was about watercolor rendering for videos, in cooperation with David Salesin (University of Washington and Adobe Research), the other on Dynamic 2D patterns for shading 3D scenes, in cooperation with Lee Markosian (University of Michigan).

ARTIS organized the Eurographics Symposium on Rendering in Grenoble, in June. The Symposium brought together 150 researchers from around the world for three days of high-level scientific presentations.

We cooperated with Studio Broceliande on the creation of a watercolor-rendering plug-in for MentalRay. This cooperation resulted in one patent.

The objectives of ARTIS combine the resolution of “classical”, but difficult, issues in Computer Graphics, with the development of new approaches for emerging applications. A transverse objective is to develop a new approach to synthetic image creation that combines notions of geometry, appearance, style and priority.

Complete set of lighting effects in a scene, including shadows and multiple reflections or scattering

Calculation process in which an image formation model is inverted to recover scene parameters from a set of images

The classical approach to render images of three-dimensional environments is based on modeling the interaction of light with a geometric object model. Such models can be entirely empirical or based on true physical behavior when actual simulations are desired. Models are needed for the geometry of objects, the appearance characteristics of the scene (including light sources, reflectance models, detail and texture models...) and the types of representations used (for instance wavelet functions to represent the lighting distribution on a surface). Research on lighting and rendering within ARTIS is focused on the following two main problems: lighting simulation and inverse rendering.

Although great progress has been made in the past ten years in terms of lighting simulation algorithms, the application of a general global illumination simulation technique to a very complex scene remains difficult. The main challenge in this direction lies in the complexity of light transport, and the difficulty of identifying the relevant phenomena on which the effort should be focused.

The scientific goals of ARTIS include the development of efficient (and “usable”) multiresolution simulation techniques for light transport, the control of the approximations incurred (and accepted) at all stages of the processing pipeline (from data acquisition through data representation, to calculation), as well as the validation of results against both real world cases and analytical models.

There are two distinct aspects to realism in lighting simulation: first, the physical fidelity of the computed results to the actual solution of the lighting configuration; second, the visual quality of the results. These two aspects serve two different application types: physical simulation and visually realistic rendering.

For the first case, ARTIS' goal is to study and develop lighting simulation techniques that allow incorporation of complex optical and appearance data while controlling the level of approximation. This requires, among other things, the ability to compress appearance data, as well as the representation of lighting distributions, while ensuring an acceptable balance between the access time to these functions (decompression) which has a direct impact on total computation times, and memory consumption.

Obtaining a *visually* realistic rendering is a drastically different problem, which requires an understanding of human visual perception. One of our research directions in this area is the calculation of shadows for very complex objects. In the case of a tree, for example, computing a visually satisfactory shadow does not generally require an exact solution for the shadow of each leaf, and an appropriately constrained statistical distribution is sufficient in most cases.

Computation efficiency practically limits the maximum size of scenes to which lighting simulation can be applied. Developing hierarchical and instantiation techniques allows us to treat scenes of great complexity (several millions of primitives). In general the approach consists in choosing, among the large amount of detail representing the scene, those sites or configurations that are most important for the application at hand. Computing resources can be concentrated in these areas, while a coarser approximation may be used elsewhere.

Our research effort in this area is mainly focused on light transfer simulation in scenes containing vegetation, for which we develop efficient instantiation-based hierarchical simulation algorithms.

One of the fundamental goals of ARTIS is to improve our understanding of the mathematical properties of lighting distributions (*i.e.* the functions describing light “intensity” everywhere). Some of these properties are currently “known” as conjectures, for instance the unimodality (existence of a single maximum) of the light distribution created by a convex light source on a receiving surface. This conjecture is useful for computing error bounds and thus guiding hierarchical techniques. Other interesting properties can be studied by representing irradiance as convolution splines, or by considering the frequency content of lighting distributions. We also note that better knowledge and characterization of lighting distributions is beneficial for inverse rendering applications, as explained below.
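To fix ideas, the irradiance distribution in question is the standard textbook integral of the source radiance over the emitter, recalled here as a reminder rather than as an ARTIS result:

```latex
E(x) \;=\; \int_{S} L(y \rightarrow x)\,
\frac{\cos\theta_x \,\cos\theta_y}{\lVert x - y \rVert^{2}}\;\mathrm{d}A(y)
```

where S is the (convex) light source and θ_x, θ_y are the angles between the segment joining x and y and the surface normals at x and y. The unimodality conjecture concerns the variation of E(x) as x moves across the receiving surface.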

Considering the synthetic image creation model as a calculation operating on scene characteristics (viewing conditions, geometry, light sources and appearance data), we observe that it may be possible to invert the process and compute some of the scene characteristics from a set of images.

This can only be attempted when this image calculation process is well understood, both at the theoretical level and at a more practical level with efficient software tools. We hope that the collective experience of lighting simulation and analysis accumulated by members of the project will guide us to develop efficient and accurate inverse rendering techniques: instead of aiming for the most general tool, we recognize that particular application cases involve specific properties or constraints that should be used in the modeling and inversion process.

Example applications include the reconstruction of 3D geometry by analyzing the variations of lighting and/or shadows, or the characterization of a light source from photographs of a known object.

There is no reason to restrict the use of computers for the creation and display of images to the simulation of real lighting. Indeed it has been recognized in recent years that computer processing opens fascinating new avenues for rendering images that convey particular views, emphasis, or style. These approaches are often referred to as “non-photorealistic rendering”, although we prefer the term “expressive rendering” to this negative definition.

A fundamental goal of ARTIS is to propose new image creation techniques that facilitate the generation of a great variety of images from a given scene, notably by adapting rendering to the current application. This involves, in particular, significant work on the notion of *relevance*, which is necessarily application-dependent. Relevance is the relative importance of various scene elements, or of their treatment, for the desired result, and it must be defined both qualitatively and quantitatively. Examples of specific situations include rendering specular effects, night-time imagery, technical illustration, computer-assisted drawing or sketching, etc. The notion of relevance will also have to be validated for real applications, including virtual reality settings.

Another research direction for expressive rendering concerns *rendering styles*: in many cases it should be possible to define the constitutive elements of styles, allowing the application of a given rendering style to different scenes or, in the long term, the capture of style elements from collections of images.

Finally, since the application of expressive rendering techniques generally amounts to a visual simplification, or abstraction, of the scene, particular care must be taken to make the resulting images consistent over time, for interactive or animated imagery.

Geometric models of a 3D scene are available from a variety of sources, including industrial partners. In our experience, most 3D geometry files lack all forms of high-level information, either because it was lost during format conversions, or because it was never defined by the designer of the model. On the other hand, most researchers working with 3D scene data would like to use such high-level information: which groups of polygons form connected shapes or human-recognizable objects, which have symmetries, or even which groups of polygons look like each other (also known as *instancing information*). We are working on algorithms to automatically retrieve some high-level (also named *semantic*) information from a *polygon soup*, *i.e.* a list of polygons without any information about how these polygons are related to each other.
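As a toy illustration of the simplest kind of semantic recovery, connected shapes can be found by linking polygons that share an edge. This is a sketch under our own simplifying assumptions (indexed vertices, exact index matches), not the algorithms developed in ARTIS:

```python
# Recover connected components from a "polygon soup" with shared vertex
# indices, by union-find over faces that share an edge.
from collections import defaultdict

def connected_components(faces):
    """faces: list of vertex-index tuples. Returns a list of face-index sets."""
    parent = list(range(len(faces)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra
    # Map each undirected edge to the faces that use it.
    edge_to_faces = defaultdict(list)
    for fi, face in enumerate(faces):
        n = len(face)
        for k in range(n):
            edge = tuple(sorted((face[k], face[(k + 1) % n])))
            edge_to_faces[edge].append(fi)
    # Faces sharing an edge belong to the same connected shape.
    for fis in edge_to_faces.values():
        for other in fis[1:]:
            union(fis[0], other)
    groups = defaultdict(set)
    for fi in range(len(faces)):
        groups[find(fi)].add(fi)
    return list(groups.values())

# Two disjoint shapes: a quad split into two triangles, plus a lone triangle.
soup = [(0, 1, 2), (0, 2, 3), (4, 5, 6)]
```

Instancing detection (finding groups of polygons that look alike) is considerably harder, since it must match geometry up to a rigid transformation rather than exact indices.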

Creating images from three-dimensional models is a computationally intensive task. A particularly difficult issue has long been the calculation of visibility information in 3D scenes. We are working on several issues related to visibility, such as the decomposition of a scene into appropriate regions (or cells) to assist in the precalculation of visibility relationships, or the precalculation of object sets visible from a particular view point or region of space.

More generally, we are interested in all aspects of geometric calculation that lead to the creation, simplification or transformation of 3D models. Complex scenes for virtual environments are typically assembled using data from very different sources, therefore coming in very different resolutions or amounts of detail. It is often a requirement to suppress unneeded detail in some parts of the scene, or to generate detail where it is missing. Given the very high cost of manual modeling, fully or semi-automated techniques are essential.

Furthermore, the apparent complexity and the amount of detail should also be adapted to the particular usage in the application, and we advocate that this can be realized by choosing appropriate data representations. We are therefore working on innovative data representations for 3D scenes, notably involving many image-based techniques.

We base our research on the following principles:

In all our target applications, it is crucial to control the level of approximation, for instance through reliable error bounds. Thus, all simplification techniques, either concerning geometry or lighting, require a precise mathematical analysis of the solution properties.

We seek to develop representations affording a controllable balance between these conflicting goals. In particular this applies to multiresolution techniques, where an appropriate generic process is defined that can then be applied to “well chosen” levels of the hierarchy. This aspect is of course key to an optimal adaptation to the chosen application context, for lighting simulation as well as for geometric transformation and simplification.

Modeling geometric shapes, appearance data and various phenomena is the most tedious task in the creation process for virtual scenes. In many cases it can be beneficial to analyse real documents or scenes to recover relevant parameters. These parameters can then be used to model objects, their properties (light sources, reflectance data...) or even more abstract characteristics such as rendering styles. Thus this idea of parameter extraction is present in most of our activities.

In all our applications we try to keep in mind the role of the final user in order to offer intuitive controls over the result. Depending on the targeted goal we seek a good compromise between automation and manual design. Moreover we put the user into the research loop as much as possible via industrial contracts and collaboration with digital artists.

Although it has long been recognized that the visual channel is one of the most effective means for communicating information, the use of computer processing to generate effective visual content has been mostly limited to very specific image types: realistic rendering, computer-aided cell animation, etc.

The ever-increasing complexity of available 3D models is creating a demand for improved image creation techniques for general illustration purposes. Recent examples in the literature include computer systems to generate road maps, or assembly instructions, where a simplified visual representation is a necessity.

Our work in expressive rendering and in relevance-guided rendering aims at providing effective tools for all illustration needs that work from complex 3D models. We also plan to apply our knowledge of lighting simulation, together with expressive rendering techniques, to the difficult problem of sketching illustrations for architectural applications.

Video games represent a particularly challenging domain of application since they require both real-time interaction and high levels of visual quality. Moreover, video games are developed on a variety of platforms with completely different capacities. Automatic generation of appropriate data structures and runtime selection of optimal rendering algorithms can save companies a huge amount of development (*e.g.* the EAGL library used by Electronic Arts).

More generally, interactive visualization of complex data (e.g. in scientific engineering) can be achieved only by combining various rendering accelerations (e.g. visibility culling, levels of details, etc.), an optimization task that is hard to perform “by hand” and highly data dependent. One of ARTIS' goals is to understand this dependence and automate the optimization.

Virtual heritage is a recent area which has seen spectacular growth over the past few years. Archeology and heritage exhibits are natural application areas for virtual environments and computer graphics, since they provide the ability to navigate 3D models of environments that no longer exist and cannot be recorded on videotape. Moreover, digital models and 3D renderings make it possible to enrich the navigation with annotations.

Our work on style has proved very interesting to architects, who have a long tradition of using hand-drawn schemas and wooden models to work and communicate. Wooden models can advantageously be replaced by 3D models inside a computer. Drawing, on the other hand, offers a higher level of interpretation and a richness of expression that architects really need, for example to emphasize that a given model is a hypothesis.

By investigating style analysis and expressive rendering, we could “sample” drawing styles used by architects and “apply” them to the rendering of 3D models. The computational power made available by computer assisted drawing can also lead to the development of new styles with a desired expressiveness, which would be harder to produce by hand. In particular, this approach offers the ability to navigate a 3D model while offering an expressive rendering style, raising fundamental questions on how to “animate” a style.

A system that allows a seamless blend of virtual images generated by a computer with a video stream recorded by a digital camera (*e.g.* live footage) would have many applications.

In a *virtual studio*, a TV presenter or a teacher is shot in a blue-screen environment, and is shown interacting with a fine-looking synthetic environment. *Virtual prototyping* enables reviewing the progress of complex projects, using various representations, possibly in multi-site setups. *Virtual medicine* enables repetitive, risk-free training, as well as easily built, powerful rehabilitation setups. *Virtual archeology* or *architecture* enables visits of past or future constructions. Specific styles can be used to show various hypotheses, and real-time manipulation is a powerful interface for interacting with the database.

ARTIS insists on sharing the software that is developed for internal use. All packages are listed in a dedicated section of the web site
http://

libQGLViewer is a library that provides tools to efficiently create new 3D viewers. Simple and common actions such as moving the camera with the mouse, saving snapshots or selecting objects are *not* available in standard APIs, and libQGLViewer fills this gap. It merges into a unified and complete framework the tools that everyone used to develop individually. Creating a new 3D viewer now requires 20 lines of cut-and-pasted code and 5 minutes. libQGLViewer has been distributed under the GPL licence since January 2003, and several hundred downloads are recorded each month.

PlantRad is a software program for computing solutions to the equation of light equilibrium in a complex scene including vegetation. The technology used is hierarchical radiosity with clustering and instantiation. Thanks to the latter, PlantRad is capable of treating scenes with a very high geometric complexity (up to millions of polygons) such as plants or any kind of vegetation scene where a high degree of approximate self-similarity permits a significant gain in memory requirements. Its main domains of applications are urban simulation, remote sensing simulation (See the collaboration with Noveltis, Toulouse) and plant growth simulation, as previously demonstrated during our collaboration with the LIAMA, Beijing.

In the context of the European project RealReflect, the ARTIS team has developed the HQR software based on the photon mapping method which is capable of solving the light balance equation and of giving a high quality solution. Through a graphical user interface, it reads X3D scenes using the X3DToolKit package developed at ARTIS, it allows the user to tune several parameters, computes photon maps, and reconstructs information to obtain a high quality solution. HQR is not yet available for download.

The MobiNet software allows the creation of simple applications such as video games or pedagogic illustrations, relying on an intuitive graphical interface and a language for programming a set of mobile objects (possibly across a network). This software is available in the public domain for Linux and Windows.

Basilic is a tool that automates the diffusion of research results on the web. It automatically generates the publication part of a project web site, creating index pages and web pages associated with each publication. These pages provide access to the publication itself, its abstract, associated images and movies, and anything else via web links.

All BibTeX-related information is stored in a database that is queried on the fly to generate the pages. Everyone can very easily and quickly update the site, thus guaranteeing an up-to-date web site. BibTeX and XML exports are available, and are for instance used to generate the bibliography of this activity report. Basilic is released under the GPL licence and is freely available for download.

This program provides parsers and utility functions for the BibTeX file format. The core is written in C++ and compiled as a library. Based on this library, bindings for different languages are provided using SWIG.

The long-term goal is to replace the bibtex program and its associated BST language for style files with a more recent and powerful scripting language (such as Python, Ruby, PHP, Perl...) or with Java. The other goal is to allow the easy writing of BibTeX-related tools such as converters to other formats. XdkBibTeX is used by Basilic to import from BibTeX files. XdkBibTeX is released under the GPL licence and is freely available for download.
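For illustration only, a few lines of Python give the flavor of what such a parser must extract; the entry below is made up, and the real XdkBibTeX (C++ with SWIG bindings) handles far more of the format (nested braces, string macros, concatenation):

```python
# Hypothetical minimal sketch of BibTeX entry extraction, not XdkBibTeX's API.
import re

ENTRY_RE = re.compile(r'@(\w+)\s*\{\s*([^,]+),')
FIELD_RE = re.compile(r'(\w+)\s*=\s*[{"]([^}"]*)[}"]')

def parse_bibtex(text):
    """Return a list of {'type', 'key', 'fields'} dicts for each entry."""
    entries = []
    for m in ENTRY_RE.finditer(text):
        end = text.find('@', m.end())              # body runs to next entry
        body = text[m.end():end if end != -1 else len(text)]
        entries.append({'type': m.group(1).lower(),
                        'key': m.group(2).strip(),
                        'fields': dict(FIELD_RE.findall(body))})
    return entries

# Made-up sample entry for demonstration purposes.
sample = """@inproceedings{roger07,
  title = {Whitted Ray-Tracing for Dynamic Scenes},
  year = {2007}
}"""
```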

In 2005, we derived a complete framework for the analysis of light transport in Fourier space, providing the necessary tools and equations for computing frequency information about the distribution of light in a scene . Since then, we have been working on the application of these principles to specific lighting simulation techniques, starting with *photon mapping*.

Whereas the main idea of photon mapping is to transport light as a density of light particles of unit energy, we also carry frequency information in photons, so that it is additionally possible to reconstruct frequency information in the computed image. Such information makes it possible to foresee some hard-to-predict phenomena such as the presence of shadows, the amount of blurring due to depth of field, or the low variation of light across diffuse surfaces, in the specific lighting conditions of the computed image.

The application of such a scheme is to optimally adapt the sampling in pixel space as well as in angular space (for secondary reconstruction rays), so as to avoid calculations that would ordinarily be made necessary by the absence of a correct clue about frequencies, yet are not necessary for the image itself. The ongoing work already provides some interesting and promising results, as shown in Figure .

This work is supported by an INRIA Internship, supporting the extended stay of Kartic Subr (University of California Irvine) and by our INRIA Associate Team with MIT.

We have developed a meshless finite element framework for solving light transport problems. Traditionally, finite element methods use basis functions relying on a parameterization of the mesh surface. Our experience shows that the creation of a suitable parameterization is difficult, error-prone and sensitive to the quality of the input geometry. Clustering methods have been developed to overcome these difficulties, but they, too, are error-prone and sensitive to the quality of the input geometry.

The resulting light transport solutions tend to exhibit discontinuities, necessitating heuristic post-processing before visualization. Due to these problems, finite element methods are rarely used in production. The core idea of our approach is to use finite element basis functions induced by hierarchical scattered data approximation techniques. This leads to a mathematically rigorous recipe for meshless finite element illumination computations. As a main advantage, our approach decouples the function spaces used for solving the transport equations from the representation of the scene geometry. The resulting solutions are accurate, exhibit no spurious discontinuities, and can be visualized directly without post-processing, while parameterization, meshing and clustering problems are avoided. The resulting methods are furthermore easy to implement.

We have demonstrated the power of our framework by describing implementations of hierarchical radiosity, glossy precomputed radiance transfer from distant illumination, and diffuse indirect precomputed transport from local light sources (see Figure ). Moreover, we have described how to directly visualize the solutions on graphics hardware. This research, done in cooperation with Jaakko Lehtinen and Janna Kontkanen of the Helsinki University of Technology and Mathias Zwicker of the University of California San Diego has been published as a Technical Report .

Ambient occlusion is used widely for improving the realism of real-time lighting simulations, in video games and in special effects for motion pictures.

We have developed a new, simple method for storing ambient occlusion values that is very easy to implement and uses very little CPU and GPU resources. This method can be used to store and retrieve the percentage of occlusion, in combination with the average occluded direction.

This information is used to render occlusion from moving occluders, as well as to compute illumination from an environment map at a very small cost (see Figure ).
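A CPU sketch of the precomputation step may help fix ideas: for each surface point, hemisphere samples are classified as blocked or not, yielding an occlusion fraction and an average occluded direction. The sampling scheme and the toy occluder below are our own illustrative assumptions, not the published method:

```python
# Illustrative ambient-occlusion precomputation: fraction of blocked
# hemisphere samples plus the (normalized) average blocked direction.
import math, random

def ambient_occlusion(point, normal, is_blocked, n_samples=256, seed=1):
    """is_blocked(origin, direction) -> bool for one hemisphere ray."""
    rng = random.Random(seed)
    blocked, avg = 0, [0.0, 0.0, 0.0]
    for _ in range(n_samples):
        # Rejection-sample a uniform direction in the hemisphere around normal.
        while True:
            d = [rng.uniform(-1, 1) for _ in range(3)]
            n2 = sum(x * x for x in d)
            if 1e-6 < n2 <= 1.0:
                d = [x / math.sqrt(n2) for x in d]
                if sum(a * b for a, b in zip(d, normal)) > 0.0:
                    break
        if is_blocked(point, d):
            blocked += 1
            avg = [a + x for a, x in zip(avg, d)]
    frac = blocked / n_samples
    norm = math.sqrt(sum(x * x for x in avg)) or 1.0
    return frac, [x / norm for x in avg]

# Toy occluder: every direction with d.x > 0 is blocked (half the hemisphere).
frac, mean_dir = ambient_occlusion((0, 0, 0), (0, 0, 1),
                                   lambda o, d: d[0] > 0.0)
```

With this toy occluder, roughly half the hemisphere is occluded and the average occluded direction leans toward +x.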

The speed of our algorithm is independent of the complexity of either the occluder or the receiver, making our algorithm highly suitable for games and other real-time applications. This work has appeared in the *Journal of Graphics Tools* .

We have developed a new algorithm for interactive rendering of animated scenes with Whitted Ray-Tracing, running on the GPU. We chose to focus our attention on the secondary rays (the rays generated by one or more bounces on specular objects), and used the GPU rasterizer for primary rays. Our algorithm is based on a ray-space hierarchy, allowing us to handle truly dynamic scenes without the need to rebuild or update the scene hierarchy. The ray-space hierarchy is entirely built on the GPU for every frame, using a very fast process (less than 2 ms for building the hierarchy).

Traversing the ray-space hierarchy is also done on the GPU; one of the benefits of using a ray-space hierarchy is that we have a single shader, and a fixed number of passes. After traversing each level of the hierarchy, we prune empty branches using a stream reduction method. Our algorithm results in interactive rendering with specular reflections and shadows for moderately complex scenes (700K triangles), handles any kind of dynamic or unstructured scenes without any pre-processing, and scales well with both the scene complexity and the image resolution (see Figure ). This ray-tracing algorithm has been presented at the Eurographics Symposium on Rendering 2007 .

The fundamental step in this ray-tracing algorithm is the stream-reduction pass, to prune empty branches of the hierarchy. Although it is possible (and easy) to implement this pass using the Geometry shaders of the GPU, we have designed a faster, hierarchical algorithm that was published at the Workshop on General Purpose Processing on Graphics Processing Units .
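The essence of stream reduction is easy to state on the CPU: a prefix sum over validity flags assigns each surviving element its output slot, so the stream is packed without gaps. Below is a minimal sequential sketch (the GPU version is parallel and hierarchical):

```python
# Stream reduction (compaction): keep the non-empty elements of a stream
# and pack them contiguously, using an exclusive prefix sum over flags.
def stream_reduce(stream, is_valid):
    flags = [1 if is_valid(x) else 0 for x in stream]
    # Exclusive prefix sum gives each valid element its output slot.
    offsets, total = [], 0
    for f in flags:
        offsets.append(total)
        total += f
    out = [None] * total
    for x, f, o in zip(stream, flags, offsets):
        if f:
            out[o] = x
    return out

# Hierarchy nodes, with None marking empty branches to prune.
nodes = [3, None, 7, None, None, 1]
```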

This work was done in cooperation with Ulf Assarsson of the Chalmers University of Technology in Sweden, and was started while he was a post-doctoral researcher at ARTIS, funded by INRIA in 2005, as well as during David Roger's stay at Chalmers in 2006, funded by Exploradoc.

Linear perspective projections are used extensively in graphics. They provide a non-distorted view, with simple computations that map easily to hardware. Non-linear projections, such as the view given by a fish-eye lens, are also used, either for artistic reasons or in order to provide a larger field of view, *e.g.* to approximate environment reflections or omnidirectional shadow maps. As the computations related to non-linear projections are more involved, they are harder to implement, especially in hardware, and have found little use so far in practical applications. We have applied existing methods for non-linear projections to a specific class: non-linear projections with a single center of projection, radial symmetry and convexity. This class includes, but is not limited to, paraboloid projections, hemispherical projections and fish-eye lenses.
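As a concrete member of this class, the (front) paraboloid map sends a unit direction to a point of the unit disc. The few lines below are a generic sketch of that textbook mapping, not our I3D implementation:

```python
# Paraboloid projection: a unit direction d = (x, y, z) with z >= 0 maps
# into the unit disc. Single center of projection, radially symmetric
# about the z axis.
import math

def paraboloid_project(d):
    x, y, z = d
    return (x / (1.0 + z), y / (1.0 + z))

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

# Two directions at the same angle to the axis land at the same radius,
# illustrating the radial symmetry of the projection.
a = paraboloid_project(normalize((1.0, 0.0, 1.0)))
b = paraboloid_project(normalize((0.0, 1.0, 1.0)))
```

Because the map is non-linear, the image of a straight 3D edge is a curve, which is exactly why a projected triangle has curved edges.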

We have shown that, for this class, the projection of a 3D triangle is a single curved triangle, and we have given a mathematical analysis of the curved edges of the triangle; this analysis allows us to reduce the computations involved, and to provide a faster implementation. The overhead for non-linearity is bearable and is balanced by the fact that a single non-linear projection can replace as many as five linear projections (in a hemicube), with fewer discontinuities and a smaller memory cost, thus making non-linear projections a practical alternative (see Figure ). This work has been accepted for publication at the I3D 2008 conference .

We have worked on efficiently sampling the visibility between two surfaces, given a set of occluding triangles. We have shown that recent GPUs can be used very efficiently for this task. We used bitwise arithmetic to evaluate, encode, and combine the samples blocked by each triangle.

An important point of our method is that the number of operations is almost independent of the number of samples. Our method also requires no CPU/GPU transfers, is fully implemented as geometry, vertex, and fragment shaders, and thus does not require modifying the way the geometry is sent to the graphics card.
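The bitwise encoding can be illustrated in a few lines: each occluder yields a mask whose set bits are the visibility samples it blocks, and masks are merged with a single OR per occluder, so the cost is essentially independent of the sample count. This toy CPU version is ours, not the shader code:

```python
# Toy bitwise visibility sampling: samples are bits of an integer; each
# occluder contributes a mask of the samples it blocks.
def combine_blockers(masks, n_samples):
    blocked = 0
    for m in masks:
        blocked |= m          # one OR per occluder, whatever n_samples is
    visible = ~blocked & ((1 << n_samples) - 1)
    return bin(visible).count('1') / n_samples   # visible fraction

# 8 samples; two occluders blocking overlapping subsets of them.
ratio = combine_blockers([0b00001111, 0b00111100], 8)
```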

We have presented applications of our method to soft shadows (see Figure ) and visibility analysis for the design of levels in video games. This work was presented at the Eurographics 2007 conference .

Computing soft shadows in real time is a challenging problem, as the computations are inherently complex. However, the human brain is relatively bad at analysing soft shadows, and approximate shadows are easily accepted as realistic. Based on this observation, we can trade some accuracy for computation speed. We proposed a method for computing “plausible” soft shadows on the GPU. The geometry of the scene is approximated by a set of planar bitmasks (slices), using the fast scene voxelisation introduced in the previous section. We then use the N-Buffers recently introduced by Xavier Décoret to pre-compute convolutions of these bitmasks with different kernel sizes. As shown by Cyril Soler in earlier work, at a given point, the soft shadow caused by one slice is given by the convolution of the slice with a kernel whose size and location depend on the point and the light source. Combining our GPU-based encoding of slices and convolutions, we are able to compute this result very efficiently in a fragment shader. The remaining problem is the combination of the shadows caused by the different slices. We introduce a novel scheme, based on probabilities, that performs significantly better than previous methods, although it is not exact. As a result, we can compute very appealing soft shadows (see Figure ) on arbitrary scenes (complex geometry, high polygon count, animated scenes) very fast, with all computations taking place on the GPU. The results have been accepted for publication in *Computer Graphics Forum* .
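A minimal scalar sketch of the probabilistic combination, assuming slices occlude independently (our simplified reading; the published scheme operates on GPU-encoded slices, not scalars):

```python
# Probabilistic combination of per-slice occlusion (assumed reading of
# the scheme, reduced to scalars): slice i reports the fraction o_i of
# the light source it occludes, obtained by convolving the slice
# bitmask with a point-dependent kernel.  Treating slices as
# independent occluders gives a combined visibility of prod(1 - o_i),
# which avoids systematic double-counting of overlapping occluders
# without tracking exact correlations.

def combined_visibility(slice_occlusions):
    """Combined visibility under the independence assumption."""
    v = 1.0
    for o in slice_occlusions:
        v *= 1.0 - o
    return v
```

For example, two slices each occluding half the light yield a combined visibility of 0.25 rather than the 0.0 that naive summation of occlusion would give.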

Watercolor offers a very rich medium for graphical expression. As such, it is used in a variety of applications including illustration, image processing and animation. The salient features of watercolor images, such as the brilliant colors, the subtle variation of color saturation and the visibility and texture of the underlying paper, are the result of the complex interaction of water, pigments and the support medium.

In this work, we present a method for creating watercolor-like animation, starting from video as input. The method involves two main steps: applying textures that simulate a watercolor appearance; and creating a simplified, abstracted version of the video to which the texturing operations are applied. Both of these steps are subject to highly visible temporal artifacts, so the primary technical contributions of the paper are extensions of previous methods for texturing and abstraction to provide temporal coherence when applied to video sequences. To maintain coherence for textures, we employ texture advection along lines of optical flow. We furthermore extend previous approaches by incorporating advection in both forward and reverse directions through the video, which allows for minimal texture distortion, particularly in areas of disocclusion that are otherwise highly problematic. To maintain coherence for abstraction, we employ mathematical morphology extended to the temporal domain, using filters whose temporal extents are locally controlled by the degree of distortions in the optical flow. Together, these techniques provide the first practical and robust approach for producing watercolor animations from video, which we demonstrate with a number of examples.
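The bidirectional advection idea can be sketched in 1D (a toy stand-in of ours: integer flow offsets, and a fixed 50/50 blend in place of the distortion-driven weights of the actual method):

```python
# 1D sketch of bidirectional texture advection: the texture is advected
# forward from the previous frame and backward from the next one along
# the optical flow, then the two results are blended.  Weighting toward
# the less-distorted direction is what limits stretching at
# disocclusions; a fixed weight stands in for that here.

def advect(texture, flow):
    """Move each texel by its (integer) flow offset; texels receiving no
    value keep their old one, a crude stand-in for disocclusion handling."""
    out = list(texture)
    for i, f in enumerate(flow):
        j = i + f
        if 0 <= j < len(texture):
            out[j] = texture[i]
    return out

def bidirectional(prev_tex, next_tex, fwd_flow, bwd_flow, w=0.5):
    """Blend forward- and backward-advected textures for the current frame."""
    fwd = advect(prev_tex, fwd_flow)
    bwd = advect(next_tex, bwd_flow)
    return [w * a + (1 - w) * b for a, b in zip(fwd, bwd)]
```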

This work has been published at the SIGGRAPH'07 conference .

Painterly rendering is a technique that takes inspiration from traditional paintings, usually focusing on effects similar to those achieved with oil or acrylic paint, where the individual brush strokes are more or less perceived individually. The main idea is to render a scene projected on the image plane by a set of 2D vector paint strokes holding style attributes (color, texture, etc). This representation has the effect of abstracting the rendering by using primitives larger than pixels, and emphasizing the 2D nature of the image through 2D paint strokes.

In the general case, paint strokes simultaneously represent information about objects in the scene (such as the shape or reflective properties of a surface from the current point of view) while following a stroke style provided by the user (straight or curved brush strokes, thick or thin outline, etc). During the animation they also follow the 2D or 3D motion of some scene elements. The main issues in painterly rendering originate from these conflicting goals. **Temporal coherence** of the strokes' motion is of primary interest: it comes from the desire to link the motion of a 2D primitive (a stroke) to that of a 3D primitive (e.g. a surface). Another important aspect is the **density** of strokes: when zooming in or out from an object, the number of strokes used to represent it must increase or decrease in order to maintain a uniform density in the picture plane while keeping a constant thickness in image space. Finally, an ideal painterly renderer would let the user fully specify the stroke **style** in a way that is independent of the depicted scene, but at the same time should ensure that some properties of the scene, such as object silhouettes or lighting, are well represented.

We present an object-space, particle-based system that extends the pioneering work of Meier . Our main contribution is a fully view- and lighting-dependent behavior that explicitly performs a trade-off between a user-specified stroke style and the faithful representation of the depicted scene. In this way, our system offers broader expressiveness by removing the constraints usually found in previous approaches, while still ensuring **temporal coherence**.

This work has been published in REFIG and at EGSR . A short movie rendered using this technique has been selected for the Eurographics festival on animation .

In this work, we describe a new way to render 3D scenes in a variety of non- photorealistic styles, based on patterns whose structure and motion are defined in 2D. In doing so, we sacrifice the ability of patterns that wrap onto 3D surfaces to convey shape through their structure and motion. In return, we gain several advantages, chiefly that 2D patterns are more visually abstract — a quality often sought by artists, which explains their widespread use in hand-drawn images.

Extending such styles to 3D graphics presents a challenge: how should a 2D pattern move? Our solution is to transform it each frame by a 2D similarity transform that closely follows the underlying 3D shape. The resulting motion is often surprisingly effective, and has a striking cartoon quality that matches the visual style.
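The per-frame transform can be sketched as a standard least-squares similarity (Procrustes-style) fit over tracked projections of 3D anchor points; this is an assumed formulation for illustration, not the paper's exact code:

```python
# Least-squares fit of a 2D similarity transform (rotation, uniform
# scale, translation) to point correspondences.  Encoding 2D points as
# complex numbers makes the closed form short: a similarity is
# z -> a*z + b with complex a (rotation+scale) and b (translation).
# Assumed Procrustes-style formulation, for illustration only.

def fit_similarity(src, dst):
    """Return complex (a, b) minimizing sum |a*s + b - d|^2."""
    n = len(src)
    ms = sum(src) / n                      # centroids
    md = sum(dst) / n
    num = sum((d - md) * (s - ms).conjugate() for s, d in zip(src, dst))
    den = sum(abs(s - ms) ** 2 for s in src)
    a = num / den
    b = md - a * ms
    return a, b
```

Applying the fitted transform to the whole 2D pattern makes it follow the projected shape without ever wrapping onto the 3D surface.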

This work has been presented at SIGGRAPH'07 .

Visual content is often better communicated by simplified or exaggerated images than by “real-world-like” images. In this paper, we offer a tool for creating such enhanced representations of photographs in a way consistent with the original image content. To do so, we develop a method to identify the relevant image structures and their importance. Our approach (a) uses edges as the basic structural unit in the image, (b) proposes tools to manipulate this structure in a flexible way, and (c) employs gradient-domain image processing techniques to reconstruct the final image from “cropped” gradient information. This edge-based approach to non-photorealistic image processing is made feasible by two new techniques we introduce: an addition to Gaussian scale-space theory to compute a perceptually meaningful hierarchy of structures, and a contrast estimation method necessary for faithful gradient-based reconstructions. We finally present various applications that manipulate image structure in different ways.
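A minimal 1D sketch of the gradient-domain step (the real method works on 2D images, with a Poisson solve and a perceptual importance hierarchy rather than a fixed threshold):

```python
# 1D sketch of gradient-domain manipulation: gradients of edges deemed
# unimportant (here, below a fixed magnitude threshold, a crude stand-in
# for the perceptual hierarchy) are suppressed, and the signal is
# rebuilt by reintegration.  In 2D the reintegration becomes a Poisson
# solve; in 1D it reduces to a cumulative sum.

def reconstruct(signal, threshold):
    """Suppress small gradients and reintegrate from the first sample."""
    grads = [b - a for a, b in zip(signal, signal[1:])]
    kept = [g if abs(g) >= threshold else 0 for g in grads]
    out = [signal[0]]
    for g in kept:
        out.append(out[-1] + g)
    return out
```

The slow ramp is flattened while the single strong edge survives, which is exactly the simplification effect the edge-based approach aims for.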

This work has been presented at NPAR'07 and as a SIGGRAPH poster .

Clip art is a simplified illustration form consisting of layered filled polygons or closed curves used to convey 3-D shape information in a 2-D vector graphics format. This paper focuses on the problem of direct conversion of smooth surfaces, ranging from the free-form shapes of art and design to the mathematical structures of geometry and topology, into a clip art form suitable for illustration use in books, papers and presentations. We show how to represent silhouette, shadow, gleam and other surface feature curves as the intersection of implicit surfaces, and derive equations for their efficient interrogation via particle chains. We further describe how to sort, orient, identify and fill the closed regions that overlay to form clip art. We demonstrate the results with numerous renderings used to illustrate the paper itself (see Figure ).

This work was started during the ExploraDoc stay of Elmar Eisemann at University of Illinois, Urbana-Champaign, and has been accepted for publication in IEEE Transactions on Visualization and Computer Graphics .

Aurélien Martinet defended his PhD in 2007, under the supervision of Cyril Soler and Nicolas Holzschuch, working on the automatic extraction of semantic information from non-coherent geometry. This work aims at answering a recurrent need in computer graphics: most researchers work with 3D scene data from which they need high-level information, such as which groups of polygons form connected shapes or human-recognizable objects, which have symmetries, or which groups of polygons look like each other (also known as *instancing information*). Unfortunately, such high-level information is most of the time not present in 3D geometry files, either because it was lost during format conversions, or because it was not defined that way by the designer of the model.

The question to be solved is thus how to automatically retrieve such high-level (also named *semantic*) information from a *polygon soup*, *i.e.* a list of polygons without any information about how these polygons are related to each other. During the past year, Aurélien has focused on developing a new technique for automatically extracting instantiation information in a scene, on the basis of the work he previously performed for extracting symmetries of objects . Figure shows an example of an instancing graph automatically obtained using this method: this structure is a Directed Acyclic Graph where each node is associated with a “generic object” instantiated in the scene, and each edge represents the geometric transformation of an instance.
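A toy sketch of the instancing idea, using the sorted multiset of pairwise vertex distances as a crude, hypothetical stand-in for the thesis's symmetry-based descriptors:

```python
# Toy instance detection in a polygon soup: objects that are copies of
# one another under rigid transforms share transform-invariant
# signatures.  The sorted pairwise-distance multiset used here is a
# simplistic stand-in for the actual descriptors; real meshes also
# need robustness to tessellation differences and noise.
from itertools import combinations
from collections import defaultdict

def signature(vertices, ndigits=6):
    """Sorted pairwise distances, rounded so rigid copies hash alike."""
    dists = sorted(
        round(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5, ndigits)
        for p, q in combinations(vertices, 2)
    )
    return tuple(dists)

def group_instances(objects):
    """Group object names sharing the same invariant signature."""
    groups = defaultdict(list)
    for name, verts in objects.items():
        groups[signature(verts)].append(name)
    return [sorted(g) for g in groups.values()]
```

Each resulting group corresponds to one “generic object” node of the instancing graph; recovering the per-instance transformation is the harder step addressed in the thesis.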

In view-dependent simplification, an object is simplified so that the difference between the original and simplified versions *as seen from a given viewcell* is bounded by a given error. The error is the maximum reprojection error, that is, the distance between the projection of a point in the image and the projection of its counterpart in the simplified version.
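For a single eye position, the reprojection error can be sketched as follows (hypothetical pinhole setup projecting onto the plane z = 1; the bound in the text is the maximum of this quantity over all eye positions in the viewcell):

```python
# Reprojection error for one eye position: project both the original
# point and its simplified counterpart, then measure the image-plane
# distance between the two projections.  Pinhole camera looking down
# +z, image plane at z = 1 (an assumed setup for illustration).

def project(p, eye):
    """Perspective projection of 3D point p onto the plane z = 1."""
    x, y, z = (p[i] - eye[i] for i in range(3))
    return (x / z, y / z)

def reprojection_error(p, p_simplified, eye):
    """Image-plane distance between the projections of p and p_simplified."""
    u = project(p, eye)
    v = project(p_simplified, eye)
    return ((u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2) ** 0.5
```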

To guarantee an error bound, one must know how far a point can be moved from its original position while still satisfying the reprojection error bound. This defines the *validity region* of the point. Surprisingly, finding this region is a very difficult geometric problem. Elmar Eisemann worked on it during his master's and found very important results. For example, the error bound cannot be checked only at the vertices of the mesh. Also, the maximum reprojection error is not necessarily observed at one of the corners of the viewcell. Finally, he showed how to compute the validity region exactly in the 2D case and opened the way to an extension to the 3D case. The proof is elegant and very innovative. It provides the first exact bound on view-dependent simplification error. In contrast, previously published bounds were often only approximate (though sufficient for the considered application). The results have been published in *Computer Graphics Forum* .

An industrial contract has been carried out between March 2006 and June 2007 with a movie studio in Paris named Studio Broceliande. The goal was to produce a Maya plugin that renders an animation in a watercolor style, based on one of our publications . The plugin has been released and is ready for sale. A patent has been submitted.

We have a student doing a PhD thesis in cooperation with the video game company Eden Games, located in Lyon. This PhD thesis is supported by the French “CIFRE” program. The PhD student, Lionel Atty, now in his third year, is working on rendering photorealistic effects, such as soft shadows, in real time.

We have started a cooperative research project with two video game companies in Lyon, Eden Games and WideScreen Games, in cooperation with the EVASION research team of INRIA Rhône-Alpes and the LIRIS research laboratory in Lyon. This cooperation is funded by the French “Fonds de Compétitivité des Entreprises”, the “Pôle de Compétitivité” Imaginove in Lyon, the Région Rhône-Alpes, the city of Lyon and the Grand Lyon urban area. The research theme for ARTIS is real-time global illumination for video games. This project started in September 2007, for 24 months.

We are funded by the ANR research program “Blanc” (research in generic directions) for joint research work with Jean Ponce (École Normale Supérieure) and Adrien Bartoli (Université Blaise Pascal, Clermont II), on acquisition, modelling and rendering of Image-Based Objects, with a focus on high-quality and interactive rendering. This grant started in September 2007, for 36 months.

We are funded by the MDCO (Large Datasets and Knowledge) research program of the ANR, for a joint research project with the LIRIS research laboratory (Lyon) and the LSIIT research laboratory (Strasbourg), on acquisition, rendering and relighting of real objects for their inclusion in virtual scenes. This grant started in September 2007, for 36 months.

INRIA's office of international relations has set up in 2001 a program for “associated teams” that bring together an INRIA project-team and a foreign research team, providing a form of institutional recognition to stable scientific collaborations involving several researchers on each side.

An “associated team” was created for the 2003-2005 period between ARTIS and the MIT graphics group (CSAIL Lab) on the subject of
*Flexible Rendering*. It has been extended for 2006, 2007 and 2008. This association, now in its fifth year, has been extremely positive: several research actions (described above in the
results sections) have been performed jointly with MIT researchers.

The associated team has helped this collaboration on a practical level by providing funding for researcher exchanges. The two teams know each other very well and frequent visits and conference calls make actual cooperation a reality (for instance publications , , , , , are co-signed by researchers from the two groups).

The activity of the associate team in 2007 focused on the following two items:

A visit to MIT by Emmanuel Turquin in December 2007, during which he worked with Jaakko Lehtinen on Meshless Finite Element Methods for Global Illumination (section ).

The development of our project on the frequency analysis of light transport, with an emphasis on the development of practical algorithms for the evaluation of frequency spectra in photon tracing (section ).

The Region Rhône-Alpes has established a grant program to help PhD students in starting actual international cooperation during their PhD years. The following actions have been supported by the program:

David Vanderhaeghe has spent 6 months working with Victor Ostromoukhov on halftoning.

Lionel Baboud is currently spending 6 months working with Natalya Tatarchuk at the AMD Research Center in Boston.

We have started a cooperation with David Salesin of the Adobe Research Center in Seattle. This cooperation has resulted in extended stays of Adrien Bousseau and Elmar Eisemann in Seattle, one publication in common , and gifts from Adobe Company for the research activity in ARTIS.

INRIA has established an internship program to finance stays from foreign students in INRIA research programs. Kartic Subr, from the University of California at Irvine, is staying at the ARTIS research project for six months, funded by the INRIA internship program, from October 2007 to March 2008, working with Cyril Soler and Nicolas Holzschuch on Frequency Analysis of Light Transport for Photon Mapping.

INRIA has established a Sabbatical program to encourage mobility by researchers. Nicolas Holzschuch is staying at the Department of Computer Science of Cornell University, for 12 months, funded by this program.

ARTIS is developing its links with the gaming industry in Rhône-Alpes by taking an active part in the exchanges and collaborations piloted by the *Lyon Game* association. This association, which has been granted “Pôle de compétitivité” status, works actively to favor game-related business development and academic collaboration between studios and laboratories. Through it, ARTIS has a CIFRE PhD student, Lionel Atty, co-supervised with Eden Games and working on real-time realistic rendering (see section ). Three researchers are working with Phoenix Interactive, either on short-term consulting or on long-term research projects. One PhD student, Thierry Stein, has started his research on a subject defined after discussions with the game designer of Phoenix, Marc Albinet. Xavier Décoret is a member of the scientific committee of the “Pôle de compétitivité”.

The proper dissemination of scientific results is an important part of their value. Since most of this dissemination is done using the web, a new bibliography server has been developed to ease it.

Most of the members of the team (faculty members as well as Ph.D. students) give courses. This educational effort is also present in the distribution of libraries such as libQGLViewer, which have a real pedagogical interest since they simplify and explain the creation of computer graphics images. The project is also involved in the animation of the “Fête de la science” (a science popularization event), and is often consulted for its scientific expertise.

Freestyle is software for non-photorealistic line drawing rendering from 3D scenes. It is designed as a programmable interface to allow maximum control over the style of the final drawing: the user “programs” how the silhouettes and other feature lines from the 3D model should be turned into stylized strokes, using a set of programmable operators dedicated to style description. This programmable approach, inspired by the shading languages available in photorealistic renderers such as Pixar's RenderMan, overcomes the limitations of integrated software with access to a limited number of parameters, and permits the design of an infinite variety of rich and complex styles. The system currently focuses on pure line drawing as a first step. The style description language is Python augmented with our set of operators. Freestyle was developed in the framework of a research project dedicated to the study of stylized line drawing rendering from 3D scenes. This research has led to two publications , . This software is distributed under the terms of the GPL license.

Nicolas Holzschuch is:

Organiser of the Eurographics Symposium on Rendering 2007 in Grenoble,

Member of the program committee of EGSR 2007 and EGSR 2008,

Member of the program committee of VMV 2007,

Member of the program committee of Pacific Graphics 2008,

Joëlle Thollot is:

Member of the program committee of NPAR'07 and EG'08, and chair of the NPAR'07 poster session,

Member of the “Commission de Spécialistes” of INPG,

Member of the “Conseil d'Administration” of ENSIMAG,

Member of the “Comité d'UR” of INRIA Rhône-Alpes.