Section: New Results

Axis 3: Rendering, Visualization and Illustration

Real-Time Sampling from Captured Environment Map

Participants: H. Lu, R. Pacanowski, X. Granier

Figure 11. Time-varying light sample distribution for one pixel (cyan dot) on the dragon model when lit with a dynamic environment map [95]. This example runs on average at 145 fps using Multiple Importance Sampling with 50 samples for the energy-conserving Lafortune Phong BRDF with a shininess exponent set to 150.

We have introduced [23] a simple and effective technique for light-based importance sampling of dynamic environment maps, built on the formalism of Multiple Importance Sampling (MIS). The core idea is to balance, per pixel, the number of samples drawn from each cube-map face according to a quick and conservative evaluation of the lighting contribution, which increases the number of effective samples. To remain suitable for dynamically generated or captured HDR environment maps, everything is computed online at each frame, without any global preprocessing. Our MIS formalism can easily be extended to other strategies, such as BRDF importance sampling.
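The per-pixel balancing step can be sketched as follows. This is a simplified illustration, not the published implementation: it assumes a precomputed conservative contribution estimate per cube-map face (the `contributions` list is hypothetical input) and distributes a fixed sample budget proportionally, handing out rounding leftovers to the faces with the largest fractional shares.

```python
def allocate_samples(contributions, total):
    """Distribute `total` samples across cube-map faces, proportionally
    to each face's conservative lighting-contribution estimate."""
    s = sum(contributions)
    raw = [total * c / s for c in contributions]   # ideal real-valued counts
    counts = [int(r) for r in raw]                 # floor to integers
    # Assign the remaining samples to the largest fractional parts
    remainder = total - sum(counts)
    order = sorted(range(len(raw)), key=lambda i: raw[i] - counts[i],
                   reverse=True)
    for i in order[:remainder]:
        counts[i] += 1
    return counts

# Example: 50 samples over 6 faces, one face dominating the lighting
print(allocate_samples([4.0, 1.0, 1.0, 1.0, 1.0, 2.0], 50))
# → [20, 5, 5, 5, 5, 10]
```

In the actual method this allocation feeds the MIS estimator, so faces with negligible contribution receive few or no samples while the total budget stays fixed.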

Screen-Space Curvature for Production-Quality Rendering and Compositing

Participants: N. Mellado, P. Barla, G. Guennebaud, P. Reuter

Curvature is commonly employed to enhance details in textured 3D models, or to modulate shading at the rendering or compositing stage. However, existing methods that compute curvature in object space rely on mesh-based surfaces and work at the vertex level. Consequently, they are not well adapted to production-quality models, which rely either on subdivision surfaces with displacement and bump maps, or on implicit and procedural representations. In practice, they would require a view-dependent scene discretization at each frame to adapt geometry to visible details and avoid aliasing artifacts. Our approach [24] is independent of both scene complexity and the choice of surface representation, since it computes mean curvature from scratch at each frame in screen space. It works without any preprocessing and provides a controllable screen-space scale parameter, which makes it ideal for production requirements, during either rendering or compositing.
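To give an idea of what a screen-space curvature estimate looks like, the sketch below approximates mean curvature as half the divergence of the per-pixel normal field, using finite differences over the image. This is only a textbook approximation under assumed inputs (a camera-space normal buffer, ignoring pixel-size scaling and the paper's scale parameter), not the authors' algorithm.

```python
import numpy as np

def screen_space_mean_curvature(normals):
    """Approximate mean curvature from a (H, W, 3) buffer of unit
    camera-space normals, as -0.5 * divergence of the normal field.
    Finite differences are taken in pixel units; a real implementation
    would rescale by the screen-space footprint of a pixel."""
    dnx_dx = np.gradient(normals[..., 0], axis=1)  # d n_x / d x
    dny_dy = np.gradient(normals[..., 1], axis=0)  # d n_y / d y
    return -0.5 * (dnx_dx + dny_dy)

# A flat surface (constant normal) has zero mean curvature everywhere
flat = np.zeros((8, 8, 3))
flat[..., 2] = 1.0
curv = screen_space_mean_curvature(flat)
```

Because the estimate only reads G-buffer data, it is independent of the underlying surface representation, which is the key property exploited by the method.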

Smooth Surface Contours with Accurate Topology

Participant: P. Bénard

Computing the visible contours of a smooth 3D surface is a surprisingly difficult problem, and previous methods are prone to topological errors, such as gaps in the outline. Our approach [13] is to generate, for each viewpoint, a new triangle mesh whose contours are topologically equivalent and geometrically close to those of the original smooth surface. The contours of the mesh can then be rendered with exact visibility. The core of the approach is Contour-Consistency, a way to prove topological equivalence between the contours of two surfaces. Producing a surface tessellation that satisfies this property is itself challenging; to this end, we introduce a type of triangle that ensures consistency at the contour. We then introduce an iterative mesh generation procedure based on these ideas. This procedure does not fully guarantee consistency, but errors were not noticeable in our experiments. Our algorithm can operate on any smooth input surface representation; our implementation uses Catmull-Clark subdivision surfaces. We demonstrate results computing contours of complex 3D objects, on which our method eliminates the contour artifacts of other methods.
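For context, the standard definition of a mesh contour that the method builds on can be sketched directly: an interior edge lies on the contour when its two adjacent faces face opposite ways with respect to the viewpoint. The code below is a minimal illustration of that classical test on an assumed indexed triangle mesh, not the consistency machinery of the paper.

```python
import numpy as np
from collections import defaultdict

def contour_edges(verts, faces, eye):
    """Return the set of interior edges (as sorted vertex-index pairs)
    where face orientation w.r.t. the eye point flips sign."""
    verts = np.asarray(verts, dtype=float)
    # Signed facing value per triangle: dot(face normal, eye - centroid)
    facing = []
    for f in faces:
        a, b, c = verts[list(f)]
        n = np.cross(b - a, c - a)
        centroid = (a + b + c) / 3.0
        facing.append(float(np.dot(n, eye - centroid)))
    # Map each undirected edge to its adjacent faces
    edge_faces = defaultdict(list)
    for fi, f in enumerate(faces):
        for i in range(3):
            e = tuple(sorted((f[i], f[(i + 1) % 3])))
            edge_faces[e].append(fi)
    # Keep edges shared by exactly two faces of opposite facing
    return {e for e, fs in edge_faces.items()
            if len(fs) == 2 and facing[fs[0]] * facing[fs[1]] < 0}

# Two triangles forming a "tent": seen from the side, the ridge
# separates a front-facing and a back-facing triangle.
verts = [(0, 0, 0), (1, 0, 0), (0.5, 1, -0.5), (0.5, -1, -0.5)]
faces = [(0, 1, 2), (1, 0, 3)]
print(contour_edges(verts, faces, eye=(0.5, 10, 0)))  # → {(0, 1)}
```

The difficulty the paper addresses is that applying this per-edge test to a naive tessellation of a smooth surface yields contours whose topology can differ from that of the true smooth contour; the Contour-Consistency construction is what rules such discrepancies out.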