Section: Research Program

Axis 2: From Acquisition to Display

Challenge: Convergence of optical and digital systems to blend real and virtual worlds.

Results: Instruments to acquire the real world, to display the virtual world, and to make the two interact.

Figure 5. Light-Field transfer: global illumination between real and synthetic objects [38]

In this axis, we investigate unified acquisition and display systems, i.e., systems that combine optical instruments with digital processing. From digital to real, we investigate new display approaches [67], [54]. We consider projection systems and surfaces [35] for personal use, virtual reality, and augmented reality [30]. From the real world to the digital world, we favor direct measurements of the parameters of models and representations, using (new) optical systems, unless digitization is required [48], [47]. The resulting systems have to acquire the different phenomena described in Axis 1 and to display them efficiently [52], [29], [53], [56]. By efficient, we mean that we want to shorten the path between the real world and the virtual world, by increasing the data bandwidth between the real (analog) and the virtual (digital) worlds, and by reducing the latency of real-time interactions (we have to avoid unnecessary conversions and to reduce processing time). To reach this goal, the systems have to be designed as a whole, not as a simple concatenation of optical systems and digital processes, nor by considering each component independently [57].

To increase data bandwidth, one solution is to further parallelize the physical systems, for instance by multiplying the number of simultaneous acquisitions (e.g., simultaneous images from multiple viewpoints [56], [75]). Similarly, increasing the number of viewpoints is a way toward the creation of full 3D displays [67]. However, full acquisition or display of 3D real environments theoretically requires a continuous field of viewpoints, leading to huge data sizes. Despite the current belief that the increase of computational power will fill the gap, experience with visual and physical realism suggests otherwise: when processing power doubles, users may ask for four times more accuracy, which increases the data size as well. To reach the best performance, a trade-off has to be found between the amount of data required to represent reality accurately and the amount of processing required. This trade-off may be achieved using compressive sensing, a recent approach from the applied mathematics community that provides tools to accurately reconstruct a signal from a small set of measurements, assuming that the signal is sparse in some transform domain (e.g., [74], [99]).
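To make the compressive-sensing idea concrete, here is a minimal Python (NumPy) sketch: it recovers a signal that is sparse in the canonical basis from far fewer random measurements than samples, using iterative soft-thresholding (ISTA). The sizes, the Gaussian sensing matrix, and the threshold are illustrative assumptions, not taken from the cited works.

```python
# Compressive-sensing sketch: recover a sparse signal x from m << n
# random measurements y = A @ x, via iterative soft-thresholding (ISTA).
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 400, 100, 8                 # signal length, measurements, sparsity

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true                                  # only m measurements

t = 1.0 / np.linalg.norm(A, 2) ** 2             # step size 1/L
lam = 0.05                                      # sparsity weight (assumed)
x = np.zeros(n)
for _ in range(500):
    r = x + t * A.T @ (y - A @ x)               # gradient step on the data term
    x = np.sign(r) * np.maximum(np.abs(r) - t * lam, 0.0)  # soft-threshold

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

With only a quarter of the samples, the sparse signal is recovered almost exactly, which is precisely the data-versus-processing trade-off discussed above: fewer measurements at acquisition time, paid for by reconstruction computation.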

We prefer to achieve this goal by avoiding, as much as possible, the classical approach where acquisition is followed by a fitting step: fitting generally requires a large number of measurements and may itself consume too much memory and preprocessing time. By avoiding unnecessary conversions through fitting, such an approach increases speed and reduces data transfer, for acquisition as well as for display. One of the best recent examples is the work of Cossairt et al. [38]. Their whole system is designed around a unique representation of the energy field issued from (or leaving) a 3D object, either virtual or real: the Light-Field. A Light-Field encodes the light emitted in any direction from any position on an object. It is acquired thanks to a lens array that captures, and projects from, multiple simultaneous viewpoints. A single representation is used for all the steps of the system. Lens arrays, parallax barriers, and coded apertures [64] are among the key technologies to develop such acquisition systems (e.g., the Light-Field camera (Lytro, http://www.lytro.com/) [57] and the acquisition of light sources [48]) and projection systems (e.g., auto-stereoscopic displays). Such an approach is versatile and may be applied to improve classical optical instruments [62]. More generally, by designing unified optical and digital systems [71], it is possible to reduce the processing-power requirements, the memory footprint, and the cost of optical instruments.
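As an illustration of the Light-Field representation described above, the following Python sketch stores radiance as a 4D array indexed by viewpoint (u, v) and pixel (s, t), as a lens-array capture would produce, and synthesizes a refocused photograph by shifting and averaging the sub-aperture views. The array sizes, the refocus parameter alpha, and the function name are our own illustrative assumptions; this is not the pipeline of [38].

```python
# Light-field sketch: L[u, v, s, t] holds radiance per viewpoint (u, v)
# and pixel (s, t). Shift-and-sum refocusing simulates a photograph
# focused at a virtual depth controlled by `alpha`.
import numpy as np
from scipy.ndimage import shift

def refocus(lf: np.ndarray, alpha: float) -> np.ndarray:
    """lf: (U, V, S, T) light field; returns an (S, T) refocused image."""
    U, V, S, T = lf.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0       # central viewpoint
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Each sub-aperture view is translated proportionally to its
            # offset from the central viewpoint, then all views are averaged.
            du, dv = alpha * (u - cu), alpha * (v - cv)
            out += shift(lf[u, v], (du, dv), order=1, mode='nearest')
    return out / (U * V)

# Example: a synthetic 5x5-viewpoint light field of 64x64 views.
lf = np.random.rand(5, 5, 64, 64)
img = refocus(lf, alpha=0.5)   # alpha = 0 keeps the native focal plane
```

The point of the example is that one representation serves capture, processing, and display: the same 4D array could be resampled toward an auto-stereoscopic panel instead of a single refocused image, with no intermediate fitted model.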

These are only some examples of what we investigate. We also consider the following approaches to develop new unified systems. First, similarly to (and based on) the analysis goal of Axis 1, we have to take into account as much as possible the characteristics of the measurement setup. For instance, when fitting cannot be avoided, integrating these characteristics may improve both the processing efficiency and the accuracy [73]. Second, we have to integrate signals from multiple sensors (such as GPS, accelerometers, etc.) to avoid some computations (e.g., [65]), as sketched below. Finally, the experience of the group in surface modeling helps in the design of optical surfaces [60] for light sources or head-mounted displays.
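As an illustration of how integrating additional sensors can replace computation, the following Python sketch fuses a gyroscope rate with an accelerometer reading through a complementary filter to track pitch, a cheap alternative to re-estimating orientation from images alone. The function name, the 0.98 gain, and the sample data are illustrative assumptions, not taken from [65].

```python
# Sensor-fusion sketch: a complementary filter combines a gyroscope's
# fast-but-drifting angular rate with an accelerometer's noisy-but-absolute
# gravity direction to track the pitch angle.
import math

def fuse_pitch(pitch: float, gyro_rate: float, ax: float, az: float,
               dt: float, gain: float = 0.98) -> float:
    """One filter step: gyro integration corrected by the accelerometer."""
    pitch_gyro = pitch + gyro_rate * dt    # integrate angular velocity
    pitch_acc = math.atan2(ax, az)         # pitch implied by gravity
    return gain * pitch_gyro + (1.0 - gain) * pitch_acc

# Example: identical samples at 100 Hz; the estimate converges smoothly.
pitch = 0.0
for gyro_rate, ax, az in [(0.01, 0.05, 0.99)] * 100:
    pitch = fuse_pitch(pitch, gyro_rate, ax, az, dt=0.01)
print("estimated pitch (rad):", pitch)
```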