Section: Overall Objectives

General Introduction

Computer-generated images are ubiquitous in everyday life. They result from a process that has changed little over the years: the optical phenomena due to the propagation of light in a 3D environment are simulated, taking into account how light is scattered [62], [39] according to the shape and material characteristics of objects. The intersection of optics (for the underlying laws of physics) and computer science (for modeling and computational efficiency) provides a unique opportunity to tighten the links between these domains, first to improve the image generation process (computer graphics, optics, and virtual reality) and then to develop new acquisition and display technologies (optics, mixed reality, and machine vision).

Most of the time, light, shape, and matter properties are studied, acquired, and modeled separately, relying on realistic or stylized rendering processes to combine them into final pixel colors. Such modularity, inherited from classical physics, has the practical advantage of permitting the same models to be reused in various contexts. However, independent developments lead to unoptimized pipelines and solutions that are difficult to control, since it is often unclear which property causes which part of the expected result. Indeed, the most efficient solutions are usually those that blur the frontiers between light, shape, and matter to obtain specialized and optimized pipelines, as in real-time applications (e.g., Bidirectional Texture Functions [75] and light-field rendering [37]). Keeping these three properties separate may lead to other problems. For instance:

  • Measured materials are often too detailed to be directly usable in rendering systems, so data-reduction techniques have to be developed [72], [76], leading to an inefficient transfer between the real and digital worlds;

  • It is currently extremely challenging (if not impossible) to directly control or manipulate the interactions between light, shape, and matter: accurate lighting processes may produce solutions that do not fulfill users' expectations;

  • Artists can spend hours or even days modeling highly complex surfaces whose details end up invisible [97] because of an inappropriate choice of light sources or reflection properties.
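The modularity described above can be made concrete with a minimal shading sketch: light, shape, and matter are modeled independently and only meet at the very last step, when a pixel color is computed. The example below uses simple Lambertian (diffuse) reflection; all variable and function names are illustrative, not taken from any particular rendering system.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

# Matter: a diffuse RGB albedo (reflectance of the surface).
albedo = (0.8, 0.2, 0.2)

# Shape: the surface normal at the shaded point.
normal = normalize((0.0, 1.0, 0.0))

# Light: direction toward the source and its RGB intensity.
light_dir = normalize((0.0, 1.0, 1.0))
light_rgb = (1.0, 1.0, 1.0)

def shade(albedo, normal, light_dir, light_rgb):
    """Combine the three independent models into one pixel color
    using the Lambertian BRDF (albedo / pi) and the cosine term."""
    cos_theta = max(0.0, dot(normal, light_dir))
    return tuple(a * l * cos_theta / math.pi
                 for a, l in zip(albedo, light_rgb))

pixel = shade(albedo, normal, light_dir, light_rgb)
```

The three inputs can each be swapped out without touching the others, which is exactly the reuse advantage noted above; conversely, nothing in this structure tells us which of the three inputs is responsible for a given aspect of `pixel`, which illustrates the control problem.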

Most traditional applications target human observers. Depending on how deeply the specificities of each user are taken into account, the requirements on representations and algorithms may differ.

Figure 1. Examples of new display technologies; they are no longer limited to a simple array of low-dynamic-range 2D RGB values. Left: auto-stereoscopic display (©Nintendo); middle: HDR display (©Dolby Digital); right: printing of both geometry and material [54].

With the evolution of measurement and display technologies that go beyond conventional images (e.g., as illustrated in Figure 1, high-dynamic-range imaging [87], stereoscopic and other new display technologies [58], and physical fabrication [28], [45], [54]), the frontiers between the real and virtual worlds are vanishing [41]. In this context, a sensor combined with computational capabilities may also be considered another kind of observer. Creating separate models of light, shape, and matter for such an extended range of applications and observers is often inefficient and sometimes yields unexpected results. Pertinent solutions must take into account the properties of the observer (human or machine) and the application goals.