

Section: New Results

Distributed processing and robust communication

Loss concealment based on video inpainting

Participants : Mounira Ebdelli, Christine Guillemot, Olivier Le Meur.

In 2011, we started developing a loss concealment scheme based on a new video exemplar-based inpainting algorithm. The developed video inpainting approach relies on new patch priority functions as well as on a motion confidence-aided neighbor embedding technique. Neighbor embedding approaches aim at approximating input vectors (or data points) as a linear combination of their neighbors. The search for the weights of the linear combination (i.e., of the embedding) is formulated as a constrained least squares problem. When using locally linear embedding, the constraint is that the sum of the weights is equal to 1. We have also considered non-negative matrix factorization to solve the problem, in which case the constraint is that the weights and the other vector are non-negative. The motion confidence introduced in the neighbor embedding improves the robustness of the algorithm in the sense that it limits the error propagation effects which would otherwise result from uncertainties in the motion information of the unknown pixels to be estimated. A new patch similarity measure which accounts for the correlation between motion information has been defined for the K-NN search inherent to neighbor embedding techniques. Evaluations of the algorithm in a context of video editing (object removal) are on-going. The next step will be to assess the performance of the approach in a context of loss concealment, that is, to estimate unknown pixels after decoding when the corresponding transport packets have been lost on the transmission network.
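The sketch below (a simplified illustration, not the implementation used in our experiments) shows the two weight-computation variants mentioned above. It assumes the patches have already been vectorized and the K nearest neighbors already found by the motion-aware similarity search; the non-negative variant is solved here with plain non-negative least squares rather than the NMF formulation, and the motion confidence term is omitted.

```python
# Minimal sketch of the two neighbor embedding variants: locally linear
# embedding (LLE) weights with a sum-to-one constraint, and a non-negative
# least-squares variant. `target` and `neighbors` are assumed to be
# vectorized patches returned by a K-NN search.
import numpy as np
from scipy.optimize import nnls

def lle_weights(target, neighbors, reg=1e-3):
    """Weights summing to 1 that best reconstruct `target` from `neighbors`.

    target:    (d,) vector (known pixels of the patch to fill)
    neighbors: (K, d) matrix, one candidate patch per row
    """
    diff = neighbors - target                   # (K, d) differences
    C = diff @ diff.T                           # local Gram matrix (K, K)
    C += reg * np.trace(C) * np.eye(len(C))     # regularize for stability
    w = np.linalg.solve(C, np.ones(len(C)))
    return w / w.sum()                          # enforce the sum-to-one constraint

def nonnegative_weights(target, neighbors):
    """Non-negative weights, solved as a constrained least squares problem."""
    w, _ = nnls(neighbors.T, target)            # argmin ||N^T w - target||, w >= 0
    return w

# Toy usage: approximate a patch from 5 neighbors of dimension 16.
rng = np.random.default_rng(0)
neighbors = rng.normal(size=(5, 16))
target = 0.6 * neighbors[0] + 0.4 * neighbors[1]
w_lle = lle_weights(target, neighbors)
w_nn = nonnegative_weights(target, neighbors)
estimate = w_lle @ neighbors                    # linear combination of the neighbors
```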

Unequal Erasure Protection and Object Bundle Protection

Participant : Aline Roumy.

In 2011, we started a new collaboration in the framework of the Joint INRIA/Alcatel Lucent lab. In this work, carried out with V. Roca (Planete, INRIA), B. Sayadi and R. Imad (Alcatel Lucent), we proposed and analyzed a novel technique capable of providing both an unequal erasure protection service and an object bundle protection service.

Unequal Erasure Protection: When a data flow contains information of different priority levels, it is natural to try to offer an unequal protection where the high priority data benefits from a higher protection than the rest of the data. In this work we focused on the “erasure channel”, for instance the Internet, where the UDP/IP datagram integrity is guaranteed by the physical layer FCS (or CRC) and the UDP checksum. In this context UEP refers to an Unequal Erasure Protection (rather than Error) and the FEC code being used is one of the various Application-Layer Forward Erasure Correction (AL-FEC) codes that have been designed and standardized in recent years, like Reed-Solomon, one of the LDPC variants, or Raptor(Q) codes. Offering an unequal protection in this context can be achieved by one of the following three general approaches: by using dedicated UEP-aware FEC codes, by using a dedicated UEP-aware packetization scheme, or by using a UEP-aware signaling scheme. In this work we ignored the first approach, as we wanted to reuse existing AL-FEC codes. Instead we focused on and compared the last two approaches, more precisely the well-known Priority Encoding Transmission (PET) scheme, which belongs to the UEP-aware packetization category, and a Generalized Object Encoding (GOE) scheme we proposed [53], which belongs to the UEP-aware signaling category. Through a careful modeling of both proposals [55], whose accuracy has been confirmed by simulations, we have demonstrated that the protection performance (i.e. erasure resiliency and average decoding delay) of the two approaches is equivalent, not only asymptotically but also in finite length conditions. In fact the key differences between these approaches become apparent when applying them in practical systems. Metrics such as the simplicity of the solution, the number of packets processed, the maximum memory requirements, the number of FEC encodings and decodings, as well as the complexity of the system of linear equations (number of variables) are all in favor of the GOE approach.
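As a numerical illustration of the general unequal-protection principle (independently of the PET and GOE mechanisms themselves), the sketch below computes the decoding failure probability of an ideal (n, k) MDS erasure code under i.i.d. packet losses, for a strongly protected high-priority class and a lightly protected low-priority class; the loss rate and code parameters are arbitrary assumptions chosen for illustration.

```python
# Each priority class is protected by an ideal (n, k) MDS erasure code:
# decoding succeeds iff at least k of the n packets arrive. Packet losses
# are assumed i.i.d. with probability p, a simplification of a real
# Internet erasure channel.
from math import comb

def decoding_failure_prob(k, n, p):
    """P(fewer than k of n packets received) for i.i.d. loss probability p."""
    return sum(comb(n, r) * (1 - p) ** r * p ** (n - r) for r in range(k))

p = 0.1                                          # assumed packet loss rate
# High-priority data: rate-1/2 protection; low-priority data: rate-4/5 protection.
print(decoding_failure_prob(k=10, n=20, p=p))    # high priority: far more robust
print(decoding_failure_prob(k=16, n=20, p=p))    # low priority: lighter protection
```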

Object Bundle Protection: we considered the use of PET, more precisely an extension called Universal Object Delivery (UOD), and GOE in situations where one needs to send a bundle of small objects (e.g. files). While both solutions can address this need, we showed that once again the GOE scheme is highly preferable for practical realizations. This is mostly due to the lack of flexibility of the PET/UOD approach. For instance, the limited size of a packet creates an upper bound on the number of objects that can be considered together (e.g. UOD limits this number to 255), and the symbol size has a coarse granularity (e.g. UOD requires symbols to be multiples of 4 bytes when used with RaptorQ codes), which can create rounding problems with certain sets of objects (i.e. the actual packet size may be significantly shorter than the target, and/or the actual code rate significantly different from its target). GOE has no such constraints. In particular, GOE offers the possibility to adjust the packet interleaving to the use-case and the channel erasure features. One can easily trade robustness against long erasure bursts for very short decoding delays of high priority objects and low memory requirements, which can be a key asset in the case of small, lightweight terminals or timely delivery services. This feature may be sufficiently important to justify by itself the use of a GOE FEC Scheme [55].
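The toy sketch below illustrates this interleaving trade-off (it is not the GOE FEC scheme itself): the symbols of a bundle of small objects are packetized either sequentially, which gives short decoding delays and low memory requirements for the first objects, or round-robin across objects, which spreads each object over many packets and thus better resists long erasure bursts. Object sizes and packet capacity are arbitrary.

```python
# Toy packetization of a bundle of small objects, with and without
# cross-object interleaving.
def packetize(objects, symbols_per_packet, interleave):
    """objects: list of equal-length lists of symbols; returns a list of packets."""
    if interleave:
        # Round-robin over objects: consecutive symbols of one object end up
        # in different packets (robust to bursts, but every object waits for
        # the last packet).
        order = [sym for group in zip(*objects) for sym in group]
    else:
        # Sequential: each object is confined to a few consecutive packets
        # (early objects decodable quickly, with little memory).
        order = [sym for obj in objects for sym in obj]
    return [order[i:i + symbols_per_packet]
            for i in range(0, len(order), symbols_per_packet)]

objects = [[f"O{j}S{i}" for i in range(4)] for j in range(3)]  # 3 objects, 4 symbols each
print(packetize(objects, symbols_per_packet=3, interleave=False))
print(packetize(objects, symbols_per_packet=3, interleave=True))
```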

Distributed compressed sensing

Participants : Aline Roumy, Velotiaray Toto-Zarasoa.

This work has been performed in collaboration with E. Magli and G. Coluccia (Politecnico di Torino) in the framework of the FP7 IST NOE NEWCOM++ (Jan. 2008 - Apr. 2011). A new lossy compression scheme for distributed and sparse sources under a low complexity encoding constraint has been proposed in [26]. This problem naturally arises in wireless sensor networks. For instance, the nodes of a sensor network may acquire temperature readings over time. The temperature may vary slowly, and hence consecutive readings have similar values. The readings also exhibit inter-sensor correlation, as the sensors may be in the same room, in which the temperature is rather uniform. The question hence arises of how to exploit intra- and inter-sensor correlations without communication between the sensors and with a low complexity acquisition process, in order to save energy at the sensors. Therefore, we consider continuous, correlated, distributed and sparse (in some domain) sources and perform lossy universal compression under a low encoding complexity constraint.

In order to meet the low complexity encoding constraint, the encoding stage is performed by lossy distributed compressed sensing (CS). More precisely, the proposed architecture is based on the joint use of CS, to capture the memory of the signal, and distributed source coding (DSC), to take advantage of inter-sensor correlations. First, we showed that the resilience of CS to quantization error also holds in the distributed setup. Moreover, the optimal number of measurements can be chosen as the one guaranteeing (close-to-)perfect reconstruction. In addition, using joint decoding, dequantization and reconstruction techniques boosts performance even further. The joint use of CS and DSC saves 1.18 bits per source sample for the same PSNR quality with respect to the non-distributed CS scheme. Compared to the DSC scheme (without CS), we observe a gain that increases with the rate for the same PSNR quality. All these results make the proposed scheme an attractive choice for environments such as sensor networks, in which the devices performing acquisition and processing are severely constrained in terms of energy and computation.
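The sketch below illustrates the low-complexity acquisition side under simplifying assumptions: a single sensor, scalar quantization of random CS measurements, and a basic orthogonal matching pursuit (OMP) reconstruction. The DSC stage exploiting inter-sensor correlation and the joint decoding/dequantization/reconstruction of [26] are not reproduced here.

```python
# Random compressed sensing measurements of a sparse source, scalar
# quantization at the encoder, and a simple OMP reconstruction at the decoder.
import numpy as np

def omp(A, y, sparsity):
    """Greedy OMP: recover a `sparsity`-sparse x from y ≈ A x."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))   # pick best new atom
        coefs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coefs                      # update residual
    x[support] = coefs
    return x

rng = np.random.default_rng(1)
n, m, k = 256, 64, 5                       # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
step = 0.05                                # assumed quantization step size
y_q = np.round(A @ x / step) * step        # low-complexity encoder: measure + quantize
x_hat = omp(A, y_q, sparsity=k)            # decoder-side reconstruction
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))   # relative reconstruction error
```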

Super-resolution as a communication tool

Participants : Marco Bevilacqua, Christine Guillemot, Raul Martinez-Noriega, Aline Roumy.

In 2011, we started a new collaboration in the framework of the Joint INRIA/Alcatel Lucent lab. In this work, carried out with M-L. Alberi (Alcatel Lucent), we proposed a novel technique capable of producing a high-resolution (HR) image from a single low-resolution (LR) image. This method, which belongs to the class of single-image super-resolution (SR) techniques, offers the promise of overcoming inherent limitations of video acquisition and transmission systems. More precisely, one can think of sending a low-resolution video to adapt to the complexity constraint of the encoder and/or the bandwidth limitation of the network, while the decoder reconstructs a high-resolution video.

As a first step toward the more ambitious goal of compressing video through SR, we proposed a novel method for single-image super-resolution based on a neighbor embedding technique. Each low-resolution input patch is approximated by a linear combination of nearest neighbors taken from a dictionary. This dictionary stores low-resolution and corresponding high-resolution patches taken from natural images and is thus used to infer the HR details of the super-resolved image. The entire neighbor embedding procedure is carried out in a feature space. Features, which are either the gradient values of the pixels or the mean-subtracted luminance values, are extracted from the LR input patches, and from the LR and HR patches stored in the dictionary. The algorithm thus searches for the K nearest neighbors of the feature vector of the LR input patch and then computes the weights for approximating the input feature vector. The weights thus obtained are finally used to compute a linear combination of the corresponding HR patches, which yields the super-resolved image. The use of a positivity constraint for computing the weights of the linear approximation is shown to behave more stably than the sum-to-one constraint and to lead to significantly higher PSNR values for the super-resolved images.
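The sketch below shows the per-patch processing under simplifying assumptions (a random coupled dictionary, an exhaustive K-NN search on pre-computed features, and non-negative least-squares weights); it is an illustrative approximation, not the exact implementation evaluated in our experiments.

```python
# Neighbor-embedding super-resolution for one patch: search K nearest
# neighbors in the LR part of a coupled LR/HR patch dictionary, compute
# non-negative weights, and apply the same weights to the HR patches.
import numpy as np
from scipy.optimize import nnls

def super_resolve_patch(lr_feature, dict_lr, dict_hr, K=12):
    """lr_feature: (d,) feature of the LR input patch (e.g. mean-subtracted
    luminance); dict_lr: (N, d) LR features; dict_hr: (N, D) HR patches."""
    dists = np.linalg.norm(dict_lr - lr_feature, axis=1)
    knn = np.argsort(dists)[:K]                  # K nearest neighbors in feature space
    w, _ = nnls(dict_lr[knn].T, lr_feature)      # non-negative (positivity-constrained) weights
    if w.sum() > 0:
        w /= w.sum()                             # optional renormalization (an assumption here)
    return w @ dict_hr[knn]                      # linear combination of the HR patches

# Toy usage with a random coupled dictionary (64-dim LR features, 144-dim HR patches).
rng = np.random.default_rng(2)
dict_lr = rng.normal(size=(500, 64))
dict_hr = rng.normal(size=(500, 144))
hr_patch = super_resolve_patch(dict_lr[3] + 0.01 * rng.normal(size=64), dict_lr, dict_hr)
```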