

Section: Research Program

Coding theory

OPTA limit (Optimum Performance Theoretically Attainable), rate allocation, rate-distortion optimization, lossy coding, joint source-channel coding, multiple description coding, channel modeling, oversampled frame expansions, error correcting codes.

Source coding and channel coding theory (T. M. Cover and J. A. Thomas, Elements of Information Theory, Second Edition, July 2006) is central to our compression and communication activities, in particular to the design of entropy codes and of error correcting codes. Another field of coding theory that has emerged in the context of sensor networks is Distributed Source Coding (DSC). It refers to the compression of correlated signals captured by different sensors that do not communicate with each other. All the captured signals are compressed independently and transmitted to a central base station, which can decode them jointly. DSC finds its foundation in the seminal Slepian-Wolf (D. Slepian and J. K. Wolf, “Noiseless coding of correlated information sources,” IEEE Transactions on Information Theory, 19(4), pp. 471-480, July 1973) (SW) and Wyner-Ziv (A. Wyner and J. Ziv, “The rate-distortion function for source coding with side information at the decoder,” IEEE Transactions on Information Theory, 22(1), pp. 1-10, January 1976) (WZ) theorems. Let us consider two binary correlated sources X and Y. If the two coders communicate, it is well known from Shannon's theory that the minimum lossless rate for X and Y is given by the joint entropy H(X,Y). Slepian and Wolf established in 1973 that this lossless compression rate bound can be approached with a vanishing error probability for long sequences, even if the two sources are coded separately, provided that they are decoded jointly and that their correlation is known to both the encoder and the decoder.
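
The Slepian-Wolf bound can be illustrated numerically. The short Python sketch below (an illustrative addition, assuming a doubly symmetric binary model: X uniform and Y equal to X flipped with probability p, where p = 0.1 is an arbitrary choice) compares the sum rate of fully separate coding, H(X) + H(Y), with the Slepian-Wolf sum rate H(X,Y) achieved at the corner point where Y is coded at H(Y) and X at H(X|Y):

    import math

    def h(p):
        """Binary entropy function, in bits."""
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    # Doubly symmetric binary source: X ~ Bernoulli(1/2), Y = X XOR N,
    # N ~ Bernoulli(p). Then H(X) = H(Y) = 1 bit and H(X|Y) = h(p).
    p = 0.1  # correlation parameter (illustrative value)

    H_X, H_Y = 1.0, 1.0
    H_X_given_Y = h(p)
    H_XY = H_Y + H_X_given_Y  # chain rule: H(X,Y) = H(Y) + H(X|Y)

    print(f"Independent coding:     H(X) + H(Y) = {H_X + H_Y:.3f} bits/pair")
    print(f"Slepian-Wolf sum rate:  H(X,Y)      = {H_XY:.3f} bits/pair")
    print(f"Corner point: R_Y = {H_Y:.3f}, R_X = H(X|Y) = {H_X_given_Y:.3f}")

With p = 0.1, the pair can be described with about 1.47 bits instead of 2, without any communication between the two encoders.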

In 1976, Wyner and Ziv considered the problem of coding two correlated sources X and Y with respect to a fidelity criterion. They established the rate-distortion function R*_{X|Y}(D) for the case where the side information Y is perfectly known at the decoder only. For a given target distortion D, R*_{X|Y}(D) in general satisfies R_{X|Y}(D) ≤ R*_{X|Y}(D) ≤ R_X(D), where R_{X|Y}(D) is the rate required to encode X when Y is available to both the encoder and the decoder, and R_X(D) is the minimal rate for encoding X without side information. These results give achievable rate bounds; however, the design of codes and of practical solutions for compression and communication applications remains a largely open issue.
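
A small numerical sketch of this inequality chain, under the same doubly symmetric binary model as above with Hamming distortion (the closed forms R_X(D) = 1 - h(D) and R_{X|Y}(D) = h(p) - h(D), valid for D ≤ p ≤ 1/2, are standard; the Wyner-Ziv function R*_{X|Y}(D) lies between them and is obtained by a lower convex envelope construction, omitted here):

    import math

    def h(p):
        """Binary entropy function, in bits."""
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    # X ~ Bernoulli(1/2), side information Y = X XOR N, N ~ Bernoulli(p),
    # Hamming distortion, with D <= p <= 1/2:
    #   R_X(D)     = 1 - h(D)     (no side information)
    #   R_{X|Y}(D) = h(p) - h(D)  (side information at encoder and decoder)
    p = 0.1  # correlation parameter (illustrative value)

    for D in (0.01, 0.02, 0.05):
        r_no_si = 1.0 - h(D)
        r_si_both = h(p) - h(D)
        print(f"D = {D:.2f}: R_X|Y(D) = {r_si_both:.3f} "
              f"<= R*_X|Y(D) <= R_X(D) = {r_no_si:.3f}")

The gap between the two outer curves bounds the rate loss incurred when the side information is available at the decoder only.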