Section: Overall Objectives

General Introduction

Computer-generated images are ubiquitous in our everyday life. Such images are the result of a process that has changed little over the years: the optical phenomena due to the propagation of light in a 3D environment are simulated, taking into account how light is scattered [58], [36] according to the shape and material characteristics of objects. The intersection of optics (for the underlying laws of physics) and computer science (for its modeling and computational-efficiency aspects) provides a unique opportunity to tighten the links between these domains in order, first, to improve the image generation process (computer graphics, optics, and virtual reality) and, second, to develop new acquisition and display technologies (optics, mixed reality, and machine vision).
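
For reference, such light transport is classically formalized by the rendering equation (one standard formulation, which the simulation processes above instantiate in various ways): the outgoing radiance L_o at a surface point x is the emitted radiance plus the incoming radiance reflected by the material,

  L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (n \cdot \omega_i)\, \mathrm{d}\omega_i,

where the BRDF f_r encodes the material, the point x and normal n encode the shape, and the radiance terms L_e and L_i encode the light. The coupling of all three properties within a single integral is precisely what makes them difficult to study in isolation, as discussed next.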

Most of the time, light, shape, and matter properties are studied, acquired, and modeled separately, relying on realistic or stylized rendering processes to combine them in order to create the final pixel colors. Such modularity, inherited from classical physics, has the practical advantage of allowing the same models to be reused in various contexts. However, independent developments lead to unoptimized pipelines and difficult-to-control solutions, since it is often unclear which part of the expected result is caused by which property. Indeed, the most efficient solutions are most often the ones that blur the frontiers between light, shape, and matter to produce specialized and optimized pipelines, as in real-time applications (like Bidirectional Texture Functions [66] and light-field rendering [34]). Keeping these three properties separated may lead to other problems. For instance:

  • Measured materials are too detailed to be usable in rendering systems, so data reduction techniques have to be developed [64], [67], leading to an inefficient transfer between real and digital worlds;

  • It is currently extremely challenging (if not impossible) to directly control or manipulate the interactions between light, shape, and matter. Even accurate lighting processes may therefore produce results that do not fulfill users' expectations;

  • Artists can spend hours or days modeling highly complex surfaces whose details will not be visible [85] due to inappropriate use of certain light sources or reflection properties.

Most traditional applications target human observers. Depending on how deeply the specificity of each user is taken into account, the requirements on representations and algorithms may differ.

Figure 2. Examples of new display technologies; nowadays, displays are not limited to a simple array of 2D low-dynamic-range RGB values. Left to right: an auto-stereoscopic display (©Nintendo), an HDR display (©Dolby Digital), and printing of both geometry and material [50].

With the evolution of measurement and display technologies that go beyond conventional images (e.g., as illustrated in Figure 2, High-Dynamic Range imaging [76], stereoscopic displays and other new display technologies [54], and physical fabrication [26], [42], [50]), the frontiers between real and virtual worlds are vanishing [38]. In this context, a sensor combined with computational capabilities may also be considered another kind of observer. Creating separate models of light, shape, and matter for such an extended range of applications and observers is often inefficient and sometimes yields unexpected results. Pertinent solutions must take into account the properties of the observer (human or machine) and the goals of the application.