

Section: New Software and Platforms

Platforms

Light field editor

Participants : Pierre Allain, Laurent Guillo, Christine Guillemot.

As part of the ERC Clim project, the EPI Sirocco is developing a light field editor, a tool analogous to traditional image editors such as the GNU Image Manipulation Program (Gimp) or the raster graphics editor Photoshop, but dedicated to light fields. As input data, this tool accepts, for instance, sparse light fields acquired with High Density Camera Arrays (HDCA) or denser light fields captured with microlens arrays (MLA). Two kinds of features are provided. Traditional features, such as changing the angle of view, refocusing or depth map extraction, are or will soon be supported. More advanced features, such as segmentation or inpainting, are being integrated into the tool as libraries we have developed. For instance, a segmentation performed on a specific subaperture/view of a light field can be propagated to all subapertures/views. The segmented objects or zones can then be colourized or even removed, the emptied zone being subsequently inpainted. The tool and libraries are developed in C++, and the graphical user interface relies on Qt.
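To illustrate one of the traditional features mentioned above, refocusing over a grid of sub-aperture views can be sketched with the classic shift-and-sum scheme; the structure and parameter names below are illustrative, not the editor's actual API:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Minimal grayscale view: row-major pixel grid (illustrative type).
struct View {
    int width, height;
    std::vector<double> px;
    double at(int x, int y) const {
        // Clamp to the border so shifted samples stay valid.
        x = std::max(0, std::min(width - 1, x));
        y = std::max(0, std::min(height - 1, y));
        return px[y * width + x];
    }
};

// Shift-and-sum refocusing: views[v * cols + u] is the sub-aperture
// view at angular position (u, v); alpha selects the synthetic focal
// plane. Each view is shifted proportionally to its offset from the
// grid centre, then all views are averaged.
View refocus(const std::vector<View>& views, int rows, int cols, double alpha) {
    const View& ref = views.front();
    View out{ref.width, ref.height, std::vector<double>(ref.px.size(), 0.0)};
    double cu = (cols - 1) / 2.0, cv = (rows - 1) / 2.0;
    for (int v = 0; v < rows; ++v)
        for (int u = 0; u < cols; ++u) {
            const View& src = views[v * cols + u];
            int dx = (int)std::lround(alpha * (u - cu));
            int dy = (int)std::lround(alpha * (v - cv));
            for (int y = 0; y < out.height; ++y)
                for (int x = 0; x < out.width; ++x)
                    out.px[y * out.width + x] += src.at(x + dx, y + dy);
        }
    for (double& p : out.px) p /= views.size();
    return out;
}
```

Objects on the plane selected by alpha align across the shifted views and stay sharp, while the rest of the scene is averaged into a blur.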

Acquisition of multi-view sequences for Free viewpoint Television

Participants : Cédric Le Cam, Laurent Guillo, Thomas Maugey.

The scientific and industrial community is nowadays exploring new multimedia applications using 3D data (beyond stereoscopy). In particular, Free Viewpoint Television (FTV) has attracted much attention in recent years. In those systems, the user can choose in real time the angle from which to observe the scene. Despite the great interest in FTV, the lack of realistic and ambitious datasets penalizes the research effort. The acquisition of such sequences is very costly in terms of hardware and working effort, which explains why no multi-view videos suitable for FTV have been proposed yet.

In 2017, in the context of the ADT ATeP project (funded by Inriahub), such datasets were acquired and calibration tools were developed. First, 40 omnidirectional cameras and their associated equipment were acquired by the team (thanks to Rennes Metropole funding). We first focused on the calibration of these cameras, i.e., on establishing the relationship between a 3D point and its projection in the omnidirectional image. In particular, we have shown that the unified spherical model fits the acquired omnidirectional cameras. Second, we developed tools to calibrate the cameras with respect to each other. Finally, we captured 3 multi-view sequences that are currently being prepared for sharing with the community (Fig. 1). This work has been published in [41].

Figure 1. The 40 omnidirectional cameras positioned for an indoor scene capture.
IMG/atep.png
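The unified spherical model mentioned above projects a 3D point onto the unit sphere and then through a pinhole shifted by a parameter xi along the optical axis. A minimal sketch, with illustrative parameter names (fx, fy, cx, cy for the intrinsics):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Unified spherical model projection (sketch): the 3D point (X, Y, Z)
// is normalized onto the unit sphere, divided by a perspective centre
// shifted by xi along the optical axis, then mapped to pixel
// coordinates with focal lengths (fx, fy) and principal point (cx, cy).
std::array<double, 2> projectUnifiedSphere(double X, double Y, double Z,
                                           double xi, double fx, double fy,
                                           double cx, double cy) {
    double n = std::sqrt(X * X + Y * Y + Z * Z);
    double xs = X / n, ys = Y / n, zs = Z / n;  // point on the unit sphere
    double d = zs + xi;                         // shifted perspective divide
    return {fx * xs / d + cx, fy * ys / d + cy};
}
```

With xi = 0 the model reduces to an ordinary pinhole camera; larger xi values account for the strong distortion of catadioptric and fisheye optics, which is why a single formulation can fit the omnidirectional cameras used here.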

Light fields datasets

Participants : Pierre Allain, Christine Guillemot, Laurent Guillo.

The EPI Sirocco makes extensive use of light field datasets with sparse or dense content provided by the scientific community to run tests. However, it has also generated its own natural and synthetic content.

Natural content has been created with Lytro cameras (the original first-generation Lytro and the Lytro Illum) and is already available to the community (https://www.irisa.fr/temics/demos/lightField/CLIM/DataSoftware.html). The team also owns an R8 Raytrix plenoptic camera with which still and video content has been captured. Applications taking advantage of the Raytrix API have been developed to extract views from the Raytrix light field. The number of views per frame is configurable and can be set, for instance, to 3x3 or 9x9 according to the desired sparsity.
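The idea behind view extraction can be sketched for an idealized lenslet image, assuming each microlens covers an n x n block of pixels and view (u, v) collects pixel (u, v) of every block. This is a simplification for illustration; real plenoptic pipelines such as the Raytrix API also handle lens rotation, hexagonal packing and vignetting:

```cpp
#include <cassert>
#include <vector>

// Extract sub-aperture view (u, v) from a row-major raw lenslet image
// of size rawW x rawH, assuming an ideal axis-aligned grid of
// microlenses, each covering an n x n pixel block (illustrative model).
std::vector<double> extractView(const std::vector<double>& raw,
                                int rawW, int rawH, int n, int u, int v) {
    int viewW = rawW / n, viewH = rawH / n;
    std::vector<double> view(viewW * viewH);
    for (int y = 0; y < viewH; ++y)
        for (int x = 0; x < viewW; ++x)
            // Pixel (u, v) under the microlens at block (x, y).
            view[y * viewW + x] = raw[(y * n + v) * rawW + (x * n + u)];
    return view;
}
```

Choosing n = 3 yields a 3x3 grid of views and n = 9 a 9x9 grid, which is the sense in which the per-frame view count controls sparsity.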

Synthetic content has been generated from the Sintel film (https://durian.blender.org/download/), a short computer-animated film by the Blender Institute, part of the Blender Foundation. A specific Blender add-on is used to extract views from a frame. As before, the number of views is configurable. Synthetic content has the advantage of providing a ground truth, which is useful to evaluate how efficiently our algorithms compute, for instance, depth maps.
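One common way to exploit such a ground truth (a sketch, not the team's specific evaluation protocol) is to compare an estimated depth map against the reference rendered by Blender, for instance with a root-mean-square error:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Root-mean-square error between an estimated depth map and a
// ground-truth depth map of the same size (row-major, illustrative).
double depthRmse(const std::vector<double>& est,
                 const std::vector<double>& gt) {
    double acc = 0.0;
    for (std::size_t i = 0; i < gt.size(); ++i) {
        double d = est[i] - gt[i];
        acc += d * d;
    }
    return std::sqrt(acc / gt.size());
}
```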